Nov 28 12:35:38 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 28 12:35:38 crc restorecon[4690]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:38 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 12:35:39 crc restorecon[4690]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 12:35:39 crc restorecon[4690]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Nov 28 12:35:39 crc kubenswrapper[4779]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 28 12:35:39 crc kubenswrapper[4779]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Nov 28 12:35:39 crc kubenswrapper[4779]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 28 12:35:39 crc kubenswrapper[4779]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 28 12:35:39 crc kubenswrapper[4779]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 28 12:35:39 crc kubenswrapper[4779]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.548885    4779 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551389    4779 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551406    4779 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551411    4779 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551415    4779 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551420    4779 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551425    4779 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551429    4779 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551433    4779 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551437    4779 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551441    4779 feature_gate.go:330] unrecognized feature gate: Example
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551445    4779 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551449    4779 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551453    4779 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551457    4779 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551462    4779 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551467    4779 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551471    4779 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551476    4779 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551480    4779 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551485    4779 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551497    4779 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551501    4779 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551505    4779 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551509    4779 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551512    4779 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551517    4779 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551522    4779 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551526    4779 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551530    4779 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551533    4779 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551537    4779 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551541    4779 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551545    4779 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551550    4779 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551554    4779 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551558    4779 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551561    4779 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551566    4779 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551570    4779 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551573    4779 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551577    4779 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551581    4779 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551584    4779 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551589    4779 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551592    4779 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551596    4779 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551600 4779 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551604 4779 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551608 4779 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551613 4779 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551618 4779 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551623 4779 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551627 4779 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551630 4779 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551634 4779 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551638 4779 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551641 4779 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551645 4779 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551648 4779 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551651 4779 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551655 4779 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551658 4779 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551662 4779 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551665 4779 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551669 4779 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551672 4779 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551676 4779 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551679 4779 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551683 4779 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551687 4779 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.551691 4779 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
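
The long run of feature_gate.go:330 warnings above is expected on OpenShift: the cluster's full feature-gate list (GatewayAPI, NewOLM, OnClusterBuild, and the rest) is handed to the kubelet, but the kubelet's compiled-in registry only knows upstream Kubernetes gates, so every OpenShift-level name is logged as unrecognized and skipped rather than treated as fatal. Below is a minimal sketch of a parser with that shape; the registry contents and function names are invented for illustration and are not the kubelet's real feature_gate implementation.

```go
// Sketch: parse "Name=bool" gate pairs against a compiled-in registry,
// warning on (and skipping) names the binary does not know about.
package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
)

// known stands in for the gates this binary was built with; cluster-level
// OpenShift gates (GatewayAPI, NewOLM, ...) would not appear here.
var known = map[string]bool{
	"CloudDualStackNodeIPs":                  true,
	"DisableKubeletCloudCredentialProviders": true,
	"KMSv1":                                  true,
	"ValidatingAdmissionPolicy":              true,
}

func applyGates(spec string, effective map[string]bool) {
	for _, pair := range strings.Split(spec, ",") {
		name, val, hasVal := strings.Cut(pair, "=")
		if !known[name] {
			log.Printf("unrecognized feature gate: %s", name) // warn, keep going
			continue
		}
		enabled := true
		if hasVal {
			b, err := strconv.ParseBool(val)
			if err != nil {
				log.Printf("invalid value %q for feature gate %s", val, name)
				continue
			}
			enabled = b
		}
		effective[name] = enabled
	}
}

func main() {
	effective := map[string]bool{}
	applyGates("KMSv1=true,GatewayAPI=true,NewOLM=false", effective)
	fmt.Printf("feature gates: %v\n", effective) // only recognized names survive
}
```

Only recognized names reach the effective map, which is why the feature_gate.go:386 "feature gates: {map[...]}" summaries later in this log list a short, upstream-only set despite the long input.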
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551919 4779 flags.go:64] FLAG: --address="0.0.0.0" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551929 4779 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551939 4779 flags.go:64] FLAG: --anonymous-auth="true" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551944 4779 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551949 4779 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551954 4779 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551959 4779 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551964 4779 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551968 4779 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551972 4779 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551977 4779 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551981 4779 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551986 4779 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551990 4779 flags.go:64] FLAG: --cgroup-root="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551993 4779 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.551997 4779 flags.go:64] FLAG: --client-ca-file="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552002 4779 flags.go:64] FLAG: --cloud-config="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552006 4779 flags.go:64] FLAG: --cloud-provider="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552010 4779 flags.go:64] FLAG: --cluster-dns="[]" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552014 4779 flags.go:64] FLAG: --cluster-domain="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552018 4779 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552022 4779 flags.go:64] FLAG: --config-dir="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552026 4779 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552030 4779 flags.go:64] FLAG: --container-log-max-files="5" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552035 4779 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552039 4779 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552043 4779 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552047 4779 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552051 4779 flags.go:64] FLAG: --contention-profiling="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 
12:35:39.552055 4779 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552059 4779 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552063 4779 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552067 4779 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552072 4779 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552076 4779 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552082 4779 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552086 4779 flags.go:64] FLAG: --enable-load-reader="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552105 4779 flags.go:64] FLAG: --enable-server="true" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552109 4779 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552114 4779 flags.go:64] FLAG: --event-burst="100" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552118 4779 flags.go:64] FLAG: --event-qps="50" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552123 4779 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552127 4779 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552132 4779 flags.go:64] FLAG: --eviction-hard="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552137 4779 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552141 4779 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552145 4779 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552149 4779 flags.go:64] FLAG: --eviction-soft="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552153 4779 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552157 4779 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552161 4779 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552165 4779 flags.go:64] FLAG: --experimental-mounter-path="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552168 4779 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552173 4779 flags.go:64] FLAG: --fail-swap-on="true" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552177 4779 flags.go:64] FLAG: --feature-gates="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552182 4779 flags.go:64] FLAG: --file-check-frequency="20s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552186 4779 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552190 4779 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552194 4779 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 
12:35:39.552198 4779 flags.go:64] FLAG: --healthz-port="10248" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552202 4779 flags.go:64] FLAG: --help="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552206 4779 flags.go:64] FLAG: --hostname-override="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552210 4779 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552214 4779 flags.go:64] FLAG: --http-check-frequency="20s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552218 4779 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552223 4779 flags.go:64] FLAG: --image-credential-provider-config="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552226 4779 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552230 4779 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552235 4779 flags.go:64] FLAG: --image-service-endpoint="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552239 4779 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552242 4779 flags.go:64] FLAG: --kube-api-burst="100" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552247 4779 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552251 4779 flags.go:64] FLAG: --kube-api-qps="50" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552256 4779 flags.go:64] FLAG: --kube-reserved="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552261 4779 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552265 4779 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552269 4779 flags.go:64] FLAG: --kubelet-cgroups="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552273 4779 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552277 4779 flags.go:64] FLAG: --lock-file="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552281 4779 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552285 4779 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552289 4779 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552294 4779 flags.go:64] FLAG: --log-json-split-stream="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552298 4779 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552302 4779 flags.go:64] FLAG: --log-text-split-stream="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552306 4779 flags.go:64] FLAG: --logging-format="text" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552310 4779 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552314 4779 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552318 4779 flags.go:64] FLAG: --manifest-url="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552322 4779 
flags.go:64] FLAG: --manifest-url-header="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552327 4779 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552331 4779 flags.go:64] FLAG: --max-open-files="1000000" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552336 4779 flags.go:64] FLAG: --max-pods="110" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552340 4779 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552344 4779 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552348 4779 flags.go:64] FLAG: --memory-manager-policy="None" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552352 4779 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552356 4779 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552360 4779 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552364 4779 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552373 4779 flags.go:64] FLAG: --node-status-max-images="50" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552377 4779 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552381 4779 flags.go:64] FLAG: --oom-score-adj="-999" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552385 4779 flags.go:64] FLAG: --pod-cidr="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552389 4779 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552397 4779 flags.go:64] FLAG: --pod-manifest-path="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552402 4779 flags.go:64] FLAG: --pod-max-pids="-1" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552406 4779 flags.go:64] FLAG: --pods-per-core="0" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552410 4779 flags.go:64] FLAG: --port="10250" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552415 4779 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552420 4779 flags.go:64] FLAG: --provider-id="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552424 4779 flags.go:64] FLAG: --qos-reserved="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552428 4779 flags.go:64] FLAG: --read-only-port="10255" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552433 4779 flags.go:64] FLAG: --register-node="true" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552437 4779 flags.go:64] FLAG: --register-schedulable="true" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552441 4779 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552448 4779 flags.go:64] FLAG: --registry-burst="10" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552452 4779 flags.go:64] FLAG: --registry-qps="5" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552456 4779 flags.go:64] 
FLAG: --reserved-cpus="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552459 4779 flags.go:64] FLAG: --reserved-memory="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552465 4779 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552469 4779 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552473 4779 flags.go:64] FLAG: --rotate-certificates="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552477 4779 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552481 4779 flags.go:64] FLAG: --runonce="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552485 4779 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552489 4779 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552493 4779 flags.go:64] FLAG: --seccomp-default="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552497 4779 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552501 4779 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552505 4779 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552510 4779 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552514 4779 flags.go:64] FLAG: --storage-driver-password="root" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552518 4779 flags.go:64] FLAG: --storage-driver-secure="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552522 4779 flags.go:64] FLAG: --storage-driver-table="stats" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552525 4779 flags.go:64] FLAG: --storage-driver-user="root" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552529 4779 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552534 4779 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552539 4779 flags.go:64] FLAG: --system-cgroups="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552543 4779 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552549 4779 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552553 4779 flags.go:64] FLAG: --tls-cert-file="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552557 4779 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552561 4779 flags.go:64] FLAG: --tls-min-version="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552565 4779 flags.go:64] FLAG: --tls-private-key-file="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552569 4779 flags.go:64] FLAG: --topology-manager-policy="none" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552573 4779 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552577 4779 flags.go:64] FLAG: --topology-manager-scope="container" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552581 4779 flags.go:64] 
FLAG: --v="2" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552587 4779 flags.go:64] FLAG: --version="false" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552592 4779 flags.go:64] FLAG: --vmodule="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552597 4779 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552607 4779 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552709 4779 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552714 4779 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552717 4779 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552721 4779 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552724 4779 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552729 4779 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552733 4779 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552737 4779 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552742 4779 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552745 4779 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552749 4779 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552753 4779 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552756 4779 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552760 4779 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552763 4779 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552767 4779 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552771 4779 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552774 4779 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552777 4779 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552782 4779 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
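
The flags.go:64 FLAG dump that precedes this second warning block records the kubelet's effective command line, including the deprecated --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"; per the deprecation notice at the top of this log, that value belongs in the file named by --config="/etc/kubernetes/kubelet.conf". A hypothetical sketch of emitting such a config follows; the trimmed struct is a stand-in for the real KubeletConfiguration type from k8s.io/kubelet/config/v1beta1, not the actual type. The unrecognized-gate warnings then resume below.

```go
// Sketch: express the deprecated --system-reserved flag value as a
// KubeletConfiguration document instead (field names follow the
// kubelet.config.k8s.io/v1beta1 schema; the struct itself is a stand-in).
package main

import (
	"encoding/json"
	"fmt"
)

type kubeletConfig struct {
	APIVersion     string            `json:"apiVersion"`
	Kind           string            `json:"kind"`
	SystemReserved map[string]string `json:"systemReserved,omitempty"`
	CgroupDriver   string            `json:"cgroupDriver,omitempty"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion: "kubelet.config.k8s.io/v1beta1",
		Kind:       "KubeletConfiguration",
		SystemReserved: map[string]string{ // mirrors --system-reserved above
			"cpu":               "200m",
			"memory":            "350Mi",
			"ephemeral-storage": "350Mi",
		},
		CgroupDriver: "systemd", // the CRI-reported driver seen later in this log
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Note also that the dump records --cgroup-driver="cgroupfs", while a later entry ("Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd") shows that flag being superseded by what the CRI runtime reports.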
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552787 4779 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552791 4779 feature_gate.go:330] unrecognized feature gate: Example Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552794 4779 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552798 4779 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552802 4779 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552806 4779 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552809 4779 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552813 4779 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552817 4779 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552820 4779 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552824 4779 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552828 4779 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552832 4779 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552835 4779 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552839 4779 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552842 4779 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552846 4779 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552849 4779 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552853 4779 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552856 4779 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552861 4779 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552864 4779 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552868 4779 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552871 4779 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552875 4779 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552878 4779 feature_gate.go:330] unrecognized feature gate: 
OnClusterBuild Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552882 4779 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552885 4779 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552892 4779 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552895 4779 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552899 4779 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552903 4779 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552907 4779 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552912 4779 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552916 4779 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552919 4779 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552923 4779 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552927 4779 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552932 4779 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
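
Mixed into the unrecognized-gate warnings are entries like the one just above for gates the kubelet does recognize: explicitly setting a deprecated gate (KMSv1) or an already-GA gate (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, ValidatingAdmissionPolicy) still succeeds, but logs that the override will be removed. A sketch of a registry distinguishing the three cases; the stage assignments below are assumptions for illustration, not the kubelet's real tables. The warning block continues below.

```go
// Sketch: one registry, three message kinds, matching the three kinds of
// feature_gate lines in this log (unrecognized / GA / deprecated).
package main

import "log"

type stage int

const (
	beta stage = iota
	ga
	deprecated
)

// Assumed stages, for illustration only.
var registry = map[string]stage{
	"KMSv1":                 deprecated,
	"CloudDualStackNodeIPs": ga,
	"NodeSwap":              beta,
}

func set(name string, value bool) {
	st, ok := registry[name]
	if !ok {
		log.Printf("unrecognized feature gate: %s", name)
		return
	}
	switch st {
	case ga:
		log.Printf("Setting GA feature gate %s=%t. It will be removed in a future release.", name, value)
	case deprecated:
		log.Printf("Setting deprecated feature gate %s=%t. It will be removed in a future release.", name, value)
	}
	// beta gates are set silently in this sketch
}

func main() {
	set("KMSv1", true)                 // deprecated-gate notice
	set("CloudDualStackNodeIPs", true) // GA-gate notice
	set("GatewayAPI", true)            // unrecognized
}
```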
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552935 4779 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552939 4779 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552942 4779 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552946 4779 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552950 4779 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552954 4779 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552958 4779 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552961 4779 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552965 4779 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552968 4779 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552972 4779 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.552975 4779 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.552988 4779 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.561785 4779 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.561824 4779 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561908 4779 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561916 4779 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561922 4779 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561927 4779 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561933 4779 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561938 4779 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561943 4779 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561948 4779 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 
12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561953 4779 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561958 4779 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561963 4779 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561967 4779 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561972 4779 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561977 4779 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561982 4779 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561987 4779 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561992 4779 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.561998 4779 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562004 4779 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562010 4779 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562015 4779 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562020 4779 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562024 4779 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562030 4779 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562035 4779 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562040 4779 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562045 4779 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562049 4779 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562054 4779 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562059 4779 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562064 4779 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562070 4779 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562075 4779 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562111 4779 
feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562118 4779 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562125 4779 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562134 4779 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562139 4779 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562144 4779 feature_gate.go:330] unrecognized feature gate: Example Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562149 4779 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562154 4779 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562159 4779 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562163 4779 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562168 4779 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562173 4779 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562177 4779 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562184 4779 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562189 4779 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562194 4779 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562198 4779 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562203 4779 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562208 4779 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562213 4779 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562219 4779 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
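
This warning block is a near-verbatim repeat of the earlier ones: judging by the timestamps, the same gate spec is evidently applied several times during startup (once while parsing flags, then again as configuration is merged), and each pass re-logs every unknown name while leaving the effective map unchanged, which is why the feature_gate.go:386 summary printed after each pass is identical. A toy illustration of that idempotence, with names taken from the log and the mechanism assumed; the warnings continue below.

```go
// Toy: re-applying one gate spec re-counts the warnings each pass, but the
// effective map converges immediately and stays the same.
package main

import "fmt"

var known = map[string]bool{"KMSv1": true, "ValidatingAdmissionPolicy": true}

func apply(spec map[string]bool, effective map[string]bool) (warnings int) {
	for name, v := range spec {
		if !known[name] {
			warnings++ // would log "unrecognized feature gate: <name>"
			continue
		}
		effective[name] = v
	}
	return warnings
}

func main() {
	spec := map[string]bool{"KMSv1": true, "GatewayAPI": true, "NewOLM": false}
	effective := map[string]bool{}
	for pass := 1; pass <= 3; pass++ { // same spec, repeated passes
		w := apply(spec, effective)
		fmt.Printf("pass %d: %d warnings, feature gates: %v\n", pass, w, effective)
	}
}
```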
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562225 4779 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562230 4779 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562235 4779 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562240 4779 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562246 4779 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562252 4779 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562258 4779 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562263 4779 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562268 4779 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562273 4779 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562278 4779 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562283 4779 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562288 4779 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562293 4779 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562298 4779 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562302 4779 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562308 4779 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.562317 4779 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562464 4779 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562474 4779 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562479 4779 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562484 4779 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562489 4779 
feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562494 4779 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562499 4779 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562504 4779 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562510 4779 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562515 4779 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562520 4779 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562525 4779 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562530 4779 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562534 4779 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562539 4779 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562544 4779 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562549 4779 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562554 4779 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562559 4779 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562565 4779 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562573 4779 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562579 4779 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562587 4779 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562593 4779 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562598 4779 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562603 4779 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562609 4779 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562614 4779 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562620 4779 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562625 4779 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562630 4779 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562635 4779 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562640 4779 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562644 4779 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562650 4779 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562655 4779 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562660 4779 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562666 4779 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
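
While the final pass of gate warnings continues below, two flags recorded in the FLAG dump above determine how this node registers itself: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" and --register-with-taints="node-role.kubernetes.io/master=:NoSchedule". Taints use the key[=value]:Effect syntax; here is a hypothetical parser for that format (parseTaint is not a real kubelet helper).

```go
// Sketch: parse the kubelet's taint flag syntax key[=value]:Effect, e.g.
// "node-role.kubernetes.io/master=:NoSchedule" (empty value) from the log.
package main

import (
	"fmt"
	"strings"
)

type taint struct{ Key, Value, Effect string }

func parseTaint(s string) (taint, error) {
	kv, effect, ok := strings.Cut(s, ":")
	if !ok || effect == "" {
		return taint{}, fmt.Errorf("taint %q is missing an effect", s)
	}
	key, value, _ := strings.Cut(kv, "=")
	return taint{Key: key, Value: value, Effect: effect}, nil
}

func main() {
	t, err := parseTaint("node-role.kubernetes.io/master=:NoSchedule")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", t) // {Key:node-role.kubernetes.io/master Value: Effect:NoSchedule}
}
```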
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562672 4779 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562678 4779 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562684 4779 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562690 4779 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562695 4779 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562701 4779 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562706 4779 feature_gate.go:330] unrecognized feature gate: Example Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562711 4779 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562716 4779 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562721 4779 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562726 4779 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562731 4779 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562737 4779 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562743 4779 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562749 4779 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562754 4779 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562759 4779 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562765 4779 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562770 4779 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562775 4779 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562780 4779 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562785 4779 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562790 4779 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562795 4779 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562800 4779 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562805 4779 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562810 4779 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562815 4779 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562820 4779 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562825 4779 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562830 4779 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562835 4779 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.562840 4779 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.562848 4779 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.563317 4779 server.go:940] "Client rotation is on, will bootstrap in background" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.566269 4779 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.566369 4779 certificate_store.go:130] Loading 
cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.567048 4779 server.go:997] "Starting client certificate rotation" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.567074 4779 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.567594 4779 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-24 06:27:44.539564561 +0000 UTC Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.567719 4779 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 617h52m4.971849852s for next certificate rotation Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.572739 4779 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.576203 4779 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.588999 4779 log.go:25] "Validated CRI v1 runtime API" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.604929 4779 log.go:25] "Validated CRI v1 image API" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.606803 4779 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.609518 4779 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-28-12-31-03-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.609549 4779 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.638974 4779 manager.go:217] Machine: {Timestamp:2025-11-28 12:35:39.636845371 +0000 UTC m=+0.202520805 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:232cf3c8-8956-4a87-8900-bbd0298775e9 BootID:78a2023c-0feb-4049-a56a-d55919a84d1c Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 
HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:fc:c8:22 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:fc:c8:22 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ee:0c:bc Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:69:8d:ae Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:de:a0:cf Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:26:f8:7b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:aa:f8:39:f8:15:e3 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e2:eb:c1:41:76:8b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] 
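
The Filesystems list in the Machine record above (capacity and inode counts per mount) is the kind of data cAdvisor gathers via statfs(2). A small Linux-only sketch of reading those two figures for one of the mounts from this log (hypothetical example, not cAdvisor's fs.go):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// /var is /dev/vda4 in the partition map above.
	var st syscall.Statfs_t
	if err := syscall.Statfs("/var", &st); err != nil {
		panic(err)
	}
	// Total bytes and total inodes, matching the Capacity/Inodes fields
	// reported per filesystem in the Machine record.
	fmt.Printf("capacity=%d bytes inodes=%d\n", uint64(st.Blocks)*uint64(st.Bsize), st.Files)
}
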
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.639429 4779 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.639756 4779 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.641015 4779 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.641374 4779 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.641431 4779 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.641761 4779 topology_manager.go:138] "Creating topology manager with none policy"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.641782 4779 container_manager_linux.go:303] "Creating device plugin manager"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.641960 4779 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.642024 4779 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
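
The nodeConfig dump above is a Go struct rendered as JSON; its HardEvictionThresholds entries are what later drive memory.available/nodefs.available evictions (a quantity like "100Mi" or a percentage, never both). A sketch of decoding just that slice, with struct fields mirroring the JSON keys in the log rather than kubelet's internal types:

package main

import (
	"encoding/json"
	"fmt"
)

// Threshold mirrors only the fields visible in the nodeConfig dump above.
type Threshold struct {
	Signal   string
	Operator string
	Value    struct {
		Quantity   *string // e.g. "100Mi"; null when Percentage is used instead
		Percentage float64
	}
	GracePeriod int64
}

func main() {
	// Two entries copied from the log line above.
	raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
	{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}]`
	var ts []Threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		q := "<nil>"
		if t.Value.Quantity != nil {
			q = *t.Value.Quantity
		}
		fmt.Printf("%s %s quantity=%s pct=%g\n", t.Signal, t.Operator, q, t.Value.Percentage)
	}
}
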
version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.642332 4779 state_mem.go:36] "Initialized new in-memory state store" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.642478 4779 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.643662 4779 kubelet.go:418] "Attempting to sync node with API server" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.643703 4779 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.643728 4779 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.643751 4779 kubelet.go:324] "Adding apiserver pod source" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.643769 4779 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.649217 4779 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.649978 4779 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 28 12:35:39 crc kubenswrapper[4779]: E1128 12:35:39.650036 4779 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.650025 4779 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 12:35:39 crc kubenswrapper[4779]: E1128 12:35:39.650222 4779 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.650830 4779 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.652509 4779 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.653462 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.653505 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.653520 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.653550 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.653573 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.653586 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.653600 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.653621 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.653637 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.653651 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.653669 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.653683 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.654259 4779 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.654965 4779 server.go:1280] "Started kubelet"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.655889 4779 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.656332 4779 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.656835 4779 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.657023 4779 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 28 12:35:39 crc systemd[1]: Started Kubernetes Kubelet.
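
The podresources API announced above is served on a unix socket under the kubelet root directory; the real client is the gRPC PodResourcesLister service, but a plain unix dial (run as root on the node) is enough to confirm the listener exists. A minimal sketch:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Path from the "Starting to serve the podresources API" line above.
	conn, err := net.Dial("unix", "/var/lib/kubelet/pod-resources/kubelet.sock")
	if err != nil {
		fmt.Println("socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("podresources socket is accepting connections")
}
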
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.657930 4779 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.657967 4779 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.658002 4779 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:13:03.207217314 +0000 UTC
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.658193 4779 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 231h37m23.549155151s for next certificate rotation
Nov 28 12:35:39 crc kubenswrapper[4779]: E1128 12:35:39.658468 4779 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.658607 4779 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.658642 4779 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.658894 4779 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.659003 4779 server.go:460] "Adding debug handlers to kubelet server"
Nov 28 12:35:39 crc kubenswrapper[4779]: E1128 12:35:39.659202 4779 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms"
Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.659685 4779 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Nov 28 12:35:39 crc kubenswrapper[4779]: E1128 12:35:39.659943 4779 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.660491 4779 factory.go:55] Registering systemd factory
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.660571 4779 factory.go:221] Registration of the systemd container factory successfully
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.661352 4779 factory.go:153] Registering CRI-O factory
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.661403 4779 factory.go:221] Registration of the crio container factory successfully
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.661501 4779 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.661531 4779 factory.go:103] Registering Raw factory
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.661554 4779 manager.go:1196] Started watching for new ooms in manager
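
Both rotation deadlines logged above (kube-apiserver-client-kubelet earlier, kubelet-serving here) are derived from the certificate's validity window: client-go's certificate manager picks a jittered deadline at roughly 70-90% of the cert's lifetime, which is why the deadline lands weeks before the 2026-02-24 expiry. Reading the expiry yourself from the on-disk PEM (stdlib-only sketch, run on the node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Same file the kubelet loads in the certificate_store lines above.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// The file holds cert and key blocks; parse only the certificates.
	for {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break
		}
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("not before:", cert.NotBefore, "not after:", cert.NotAfter)
	}
}
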
Nov 28 12:35:39 crc kubenswrapper[4779]: E1128 12:35:39.661458 4779 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c2bd31781e257 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 12:35:39.654873687 +0000 UTC m=+0.220549071,LastTimestamp:2025-11-28 12:35:39.654873687 +0000 UTC m=+0.220549071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.663639 4779 manager.go:319] Starting recovery of all containers
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.684835 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.684967 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.684994 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685049 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685067 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685087 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685138 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685158 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685183 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685204 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685225 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685247 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685267 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685290 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685308 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685326 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685346 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685368 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685389 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685412 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685438 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685466 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685494 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685519 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685546 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685576 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685605 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685631 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685659 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685682 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685702 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685722 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685746 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685772 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685849 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685905 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685925 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685976 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.685996 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686015 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686034 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686053 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686072 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686157 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686189 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686208 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686226 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686244 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686265 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686283 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686303 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686321 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686344 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686365 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686384 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686439 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686462 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686480 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686500 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686519 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686540 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686558 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686581 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686600 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686620 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686638 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686657 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686675 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686700 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686725 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686751 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686778 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686800 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686818 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686837 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686856 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686874 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686891 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686912 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686929 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686949 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686968 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.686988 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.687005 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.687023 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.687041 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.687062 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.687081 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.688041 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.688076 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.689731 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.689775 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.689793 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.689834 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.689855 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.689875 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.689897 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.689915 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.689946 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.689967 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.690001 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.690019 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.690037 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.692326 4779 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.692561 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.692791 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.692982 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.693250 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.693450 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.693621 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.693755 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.693876 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.694006 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.694168 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.694322 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.694495 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.694686 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.694871 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.695043 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.695258 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.695504 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.695688 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.695866 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.696045 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.696259 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.696460 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.696641 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.696817 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.697004 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.697241 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.697446 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.697621 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.697795 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.697952 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.698159 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.698414 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.698600 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.698804 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.698963 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.699088 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.699413 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b"
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.699574 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.699724 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.699920 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.700142 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.700363 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.700557 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.700755 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701375 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701407 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701422 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.699744 4779 manager.go:324] Recovery completed Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701463 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701547 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701566 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701579 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701615 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701628 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701642 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701657 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701671 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701712 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701726 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701739 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701809 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701827 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701842 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701900 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701915 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701930 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701943 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.701983 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702053 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702068 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702081 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702121 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702135 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702148 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702163 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702199 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702212 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702226 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702247 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702287 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702301 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702315 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702328 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702362 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702375 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702388 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702401 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702437 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702451 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702463 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702475 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702488 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702520 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702532 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702544 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702556 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702568 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702602 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702616 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702629 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702641 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702654 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702667 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702727 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702741 4779 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702753 4779 reconstruct.go:97] "Volume reconstruction finished" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.702762 4779 reconciler.go:26] "Reconciler: start to sync state" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.718043 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.719946 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.719981 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.719992 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.721477 4779 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.721576 4779 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.721676 4779 state_mem.go:36] "Initialized new in-memory state store" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.722177 4779 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.724595 4779 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.724880 4779 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.724930 4779 kubelet.go:2335] "Starting kubelet main sync loop" Nov 28 12:35:39 crc kubenswrapper[4779]: E1128 12:35:39.725004 4779 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 28 12:35:39 crc kubenswrapper[4779]: W1128 12:35:39.729340 4779 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 12:35:39 crc kubenswrapper[4779]: E1128 12:35:39.729539 4779 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.730395 4779 policy_none.go:49] "None policy: Start" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.732992 4779 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.733038 4779 state_mem.go:35] "Initializing new in-memory state store" Nov 28 12:35:39 crc kubenswrapper[4779]: E1128 12:35:39.758560 4779 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.798133 4779 manager.go:334] "Starting Device Plugin manager" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.798203 4779 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.798218 4779 server.go:79] "Starting device plugin registration server" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.798649 4779 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.798671 4779 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.799682 4779 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.799858 4779 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.799873 4779 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 28 12:35:39 crc kubenswrapper[4779]: E1128 12:35:39.808438 4779 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.825448 4779 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 12:35:39 crc kubenswrapper[4779]: 
I1128 12:35:39.825571 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.826762 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.826798 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.826810 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.826964 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.827162 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.827215 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.828264 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.828275 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.828295 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.828309 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.828311 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.828340 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.828433 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.828482 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.828517 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.829020 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.829046 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.829057 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.829210 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.829330 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.829374 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.829856 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.829880 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.829890 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.829901 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.829901 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.829925 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.830005 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.830124 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.830149 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.830447 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.830466 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.830479 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.830645 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.830671 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.830683 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.830961 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.830982 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.830992 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.831111 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.831134 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.831627 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.831659 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.831673 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:39 crc kubenswrapper[4779]: E1128 12:35:39.860612 4779 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.898978 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.900286 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.900321 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.900332 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.900356 4779 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 12:35:39 crc kubenswrapper[4779]: E1128 12:35:39.900845 4779 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905013 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905052 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905076 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905114 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905141 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905216 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905301 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905335 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905375 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905422 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905471 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905514 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905547 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905568 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:39 crc kubenswrapper[4779]: I1128 12:35:39.905589 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.006702 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.006837 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.006873 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.006903 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.006934 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.006966 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.006994 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007023 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007057 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007088 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007131 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007228 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007234 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007291 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007275 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007349 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007160 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007306 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007363 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007398 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007431 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007240 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007470 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007505 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007540 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007548 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007555 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc 
kubenswrapper[4779]: I1128 12:35:40.007505 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007672 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.007770 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.102063 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.104194 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.104278 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.104301 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.104344 4779 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 12:35:40 crc kubenswrapper[4779]: E1128 12:35:40.105155 4779 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.148201 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.154058 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.170345 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.176725 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.181740 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 12:35:40 crc kubenswrapper[4779]: W1128 12:35:40.202517 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-91aa86b42fa0dc67d2223abe6b084f0fc791103fff8f450af4ee9028b1ca7678 WatchSource:0}: Error finding container 91aa86b42fa0dc67d2223abe6b084f0fc791103fff8f450af4ee9028b1ca7678: Status 404 returned error can't find the container with id 91aa86b42fa0dc67d2223abe6b084f0fc791103fff8f450af4ee9028b1ca7678 Nov 28 12:35:40 crc kubenswrapper[4779]: W1128 12:35:40.203751 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8372e86744ab1ad95ca4617b5b52b89fed81955ff4c270a4bb79e5e9fcd0ff97 WatchSource:0}: Error finding container 8372e86744ab1ad95ca4617b5b52b89fed81955ff4c270a4bb79e5e9fcd0ff97: Status 404 returned error can't find the container with id 8372e86744ab1ad95ca4617b5b52b89fed81955ff4c270a4bb79e5e9fcd0ff97 Nov 28 12:35:40 crc kubenswrapper[4779]: W1128 12:35:40.205528 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-da80a9fcc822ccc2f6689fb046e837dc90743cf37e89128f80b716402c5b1d0b WatchSource:0}: Error finding container da80a9fcc822ccc2f6689fb046e837dc90743cf37e89128f80b716402c5b1d0b: Status 404 returned error can't find the container with id da80a9fcc822ccc2f6689fb046e837dc90743cf37e89128f80b716402c5b1d0b Nov 28 12:35:40 crc kubenswrapper[4779]: W1128 12:35:40.209517 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-1fa38f8bc39e1d3300c7385b389d5663ccec7e31d8a8955e01011110a597dc61 WatchSource:0}: Error finding container 1fa38f8bc39e1d3300c7385b389d5663ccec7e31d8a8955e01011110a597dc61: Status 404 returned error can't find the container with id 1fa38f8bc39e1d3300c7385b389d5663ccec7e31d8a8955e01011110a597dc61 Nov 28 12:35:40 crc kubenswrapper[4779]: W1128 12:35:40.212572 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-619766a05fc60a9887a0948d6b0dc4347373f8f4f32b7f9c4bf8bd592dee2b3f WatchSource:0}: Error finding container 619766a05fc60a9887a0948d6b0dc4347373f8f4f32b7f9c4bf8bd592dee2b3f: Status 404 returned error can't find the container with id 619766a05fc60a9887a0948d6b0dc4347373f8f4f32b7f9c4bf8bd592dee2b3f Nov 28 12:35:40 crc kubenswrapper[4779]: E1128 12:35:40.261465 4779 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="800ms" Nov 28 12:35:40 crc kubenswrapper[4779]: W1128 12:35:40.475450 4779 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 12:35:40 crc kubenswrapper[4779]: E1128 12:35:40.475542 4779 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 12:35:40 crc kubenswrapper[4779]: W1128 12:35:40.475813 4779 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 12:35:40 crc kubenswrapper[4779]: E1128 12:35:40.475891 4779 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.505789 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.507823 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.507884 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.507901 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.507939 4779 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 12:35:40 crc kubenswrapper[4779]: E1128 12:35:40.508432 4779 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.658264 4779 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.730681 4779 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10" exitCode=0 Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.730757 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10"} Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.730886 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1fa38f8bc39e1d3300c7385b389d5663ccec7e31d8a8955e01011110a597dc61"} Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.731078 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.732681 4779 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.732713 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.732722 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.733868 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b"} Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.733893 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"619766a05fc60a9887a0948d6b0dc4347373f8f4f32b7f9c4bf8bd592dee2b3f"} Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.735798 4779 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a" exitCode=0 Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.735857 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a"} Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.735878 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"da80a9fcc822ccc2f6689fb046e837dc90743cf37e89128f80b716402c5b1d0b"} Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.735953 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.737124 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.737148 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.737158 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.737671 4779 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4" exitCode=0 Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.737748 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4"} Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.737783 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"91aa86b42fa0dc67d2223abe6b084f0fc791103fff8f450af4ee9028b1ca7678"} Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.737854 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.739139 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.741258 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.741317 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.741359 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.741377 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.741390 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.741411 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.743540 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6"} Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.743622 4779 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6" exitCode=0 Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.743689 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.743684 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8372e86744ab1ad95ca4617b5b52b89fed81955ff4c270a4bb79e5e9fcd0ff97"} Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.744569 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.744619 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:40 crc kubenswrapper[4779]: I1128 12:35:40.744641 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:40 crc kubenswrapper[4779]: W1128 12:35:40.854795 4779 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 12:35:40 crc kubenswrapper[4779]: E1128 12:35:40.854876 4779 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 12:35:41 crc kubenswrapper[4779]: E1128 12:35:41.062655 4779 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s" Nov 28 12:35:41 crc kubenswrapper[4779]: W1128 12:35:41.206940 4779 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 28 12:35:41 crc kubenswrapper[4779]: E1128 12:35:41.207067 4779 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.309474 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.310960 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.311014 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.311025 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.311062 4779 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 12:35:41 crc kubenswrapper[4779]: E1128 12:35:41.311753 4779 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.752814 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92"} Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.753134 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435"} Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.753147 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec"} Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.755507 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0ca35c83bfed6e6b9e11bc2acb282ab619c3a04941a8ed540853cdd43531a00d"} Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.755735 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.756900 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.756928 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.756938 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.759317 4779 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a" exitCode=0 Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.759421 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a"} Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.759491 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.760968 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.761014 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.761028 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.765515 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed"} Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.765544 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9"} Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.765556 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35"} Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.765636 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.766610 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.766634 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 
12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.766644 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.769609 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294"} Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.769648 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897"} Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.769662 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.769663 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642"} Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.770428 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.770471 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:41 crc kubenswrapper[4779]: I1128 12:35:41.770485 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.778572 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710"} Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.778657 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6"} Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.778760 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.780059 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.780146 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.780168 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.782501 4779 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f" exitCode=0 Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.782579 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f"} Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.782672 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.782852 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.784291 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.784339 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.784361 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.784294 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.784434 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.784455 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.912413 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.914213 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.914289 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.914305 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:42 crc kubenswrapper[4779]: I1128 12:35:42.914335 4779 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 12:35:43 crc kubenswrapper[4779]: I1128 12:35:43.789308 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:43 crc kubenswrapper[4779]: I1128 12:35:43.789848 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534"} Nov 28 12:35:43 crc kubenswrapper[4779]: I1128 12:35:43.789880 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9"} Nov 28 12:35:43 crc kubenswrapper[4779]: I1128 12:35:43.789894 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24"} Nov 28 12:35:43 crc kubenswrapper[4779]: I1128 12:35:43.789909 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:43 crc kubenswrapper[4779]: I1128 12:35:43.790287 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:43 crc kubenswrapper[4779]: I1128 12:35:43.790327 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:43 crc kubenswrapper[4779]: I1128 12:35:43.790339 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:44 crc kubenswrapper[4779]: I1128 12:35:44.799277 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e"} Nov 28 12:35:44 crc kubenswrapper[4779]: I1128 12:35:44.799364 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f"} Nov 28 12:35:44 crc kubenswrapper[4779]: I1128 12:35:44.799416 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:44 crc kubenswrapper[4779]: I1128 12:35:44.799528 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:44 crc kubenswrapper[4779]: I1128 12:35:44.800833 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:44 crc kubenswrapper[4779]: I1128 12:35:44.800885 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:44 crc kubenswrapper[4779]: I1128 12:35:44.800898 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:44 crc kubenswrapper[4779]: I1128 12:35:44.801785 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:44 crc kubenswrapper[4779]: I1128 12:35:44.801843 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:44 crc kubenswrapper[4779]: I1128 12:35:44.801863 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:44 crc kubenswrapper[4779]: I1128 12:35:44.909263 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:44 crc kubenswrapper[4779]: I1128 12:35:44.957566 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.437304 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.437532 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.438920 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.438968 4779 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.438985 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.445527 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.803418 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.803589 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.803612 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.804386 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.805435 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.805501 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.805522 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.805950 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.806008 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.806030 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.806533 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.806580 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:45 crc kubenswrapper[4779]: I1128 12:35:45.806593 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:46 crc kubenswrapper[4779]: I1128 12:35:46.806604 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:46 crc kubenswrapper[4779]: I1128 12:35:46.807578 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:46 crc kubenswrapper[4779]: I1128 12:35:46.807648 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:46 crc kubenswrapper[4779]: I1128 12:35:46.807852 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:46 crc kubenswrapper[4779]: I1128 12:35:46.807870 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:46 crc 
kubenswrapper[4779]: I1128 12:35:46.809536 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:46 crc kubenswrapper[4779]: I1128 12:35:46.809590 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:46 crc kubenswrapper[4779]: I1128 12:35:46.809602 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:47 crc kubenswrapper[4779]: I1128 12:35:47.060796 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 28 12:35:47 crc kubenswrapper[4779]: I1128 12:35:47.061609 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:47 crc kubenswrapper[4779]: I1128 12:35:47.063647 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:47 crc kubenswrapper[4779]: I1128 12:35:47.063719 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:47 crc kubenswrapper[4779]: I1128 12:35:47.063740 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:47 crc kubenswrapper[4779]: I1128 12:35:47.820059 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:47 crc kubenswrapper[4779]: I1128 12:35:47.820319 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:47 crc kubenswrapper[4779]: I1128 12:35:47.821909 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:47 crc kubenswrapper[4779]: I1128 12:35:47.821980 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:47 crc kubenswrapper[4779]: I1128 12:35:47.822004 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:48 crc kubenswrapper[4779]: I1128 12:35:48.629980 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 28 12:35:48 crc kubenswrapper[4779]: I1128 12:35:48.630259 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:48 crc kubenswrapper[4779]: I1128 12:35:48.632250 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:48 crc kubenswrapper[4779]: I1128 12:35:48.632364 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:48 crc kubenswrapper[4779]: I1128 12:35:48.632381 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:49 crc kubenswrapper[4779]: I1128 12:35:49.445250 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 12:35:49 crc kubenswrapper[4779]: I1128 12:35:49.445491 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:49 crc kubenswrapper[4779]: I1128 12:35:49.447786 4779 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:49 crc kubenswrapper[4779]: I1128 12:35:49.447831 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:49 crc kubenswrapper[4779]: I1128 12:35:49.447846 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:49 crc kubenswrapper[4779]: E1128 12:35:49.808565 4779 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 12:35:50 crc kubenswrapper[4779]: I1128 12:35:50.819969 4779 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 12:35:50 crc kubenswrapper[4779]: I1128 12:35:50.820069 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 12:35:50 crc kubenswrapper[4779]: I1128 12:35:50.831131 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:50 crc kubenswrapper[4779]: I1128 12:35:50.831373 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:50 crc kubenswrapper[4779]: I1128 12:35:50.832915 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:50 crc kubenswrapper[4779]: I1128 12:35:50.832996 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:50 crc kubenswrapper[4779]: I1128 12:35:50.833076 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:51 crc kubenswrapper[4779]: I1128 12:35:51.122121 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:35:51 crc kubenswrapper[4779]: I1128 12:35:51.659019 4779 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 28 12:35:51 crc kubenswrapper[4779]: I1128 12:35:51.822834 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:51 crc kubenswrapper[4779]: I1128 12:35:51.824362 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:51 crc kubenswrapper[4779]: I1128 12:35:51.824432 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:51 crc kubenswrapper[4779]: I1128 12:35:51.824451 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:52 crc kubenswrapper[4779]: W1128 12:35:52.214656 4779 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 12:35:52 crc kubenswrapper[4779]: I1128 12:35:52.214738 4779 trace.go:236] Trace[1203801078]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 12:35:42.213) (total time: 10001ms): Nov 28 12:35:52 crc kubenswrapper[4779]: Trace[1203801078]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:35:52.214) Nov 28 12:35:52 crc kubenswrapper[4779]: Trace[1203801078]: [10.001254884s] [10.001254884s] END Nov 28 12:35:52 crc kubenswrapper[4779]: E1128 12:35:52.214755 4779 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 12:35:52 crc kubenswrapper[4779]: I1128 12:35:52.340269 4779 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 12:35:52 crc kubenswrapper[4779]: I1128 12:35:52.340546 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 12:35:52 crc kubenswrapper[4779]: E1128 12:35:52.663684 4779 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Nov 28 12:35:52 crc kubenswrapper[4779]: W1128 12:35:52.745752 4779 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 12:35:52 crc kubenswrapper[4779]: I1128 12:35:52.745867 4779 trace.go:236] Trace[267844311]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 12:35:42.744) (total time: 10001ms): Nov 28 12:35:52 crc kubenswrapper[4779]: Trace[267844311]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:35:52.745) Nov 28 12:35:52 crc kubenswrapper[4779]: Trace[267844311]: [10.001759919s] [10.001759919s] END Nov 28 12:35:52 crc kubenswrapper[4779]: E1128 12:35:52.745900 4779 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 12:35:52 crc kubenswrapper[4779]: W1128 12:35:52.791078 4779 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 12:35:52 crc kubenswrapper[4779]: I1128 12:35:52.791171 4779 trace.go:236] Trace[519571723]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 12:35:42.789) (total time: 10001ms): Nov 28 12:35:52 crc kubenswrapper[4779]: Trace[519571723]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:35:52.791) Nov 28 12:35:52 crc kubenswrapper[4779]: Trace[519571723]: [10.001288461s] [10.001288461s] END Nov 28 12:35:52 crc kubenswrapper[4779]: E1128 12:35:52.791189 4779 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 12:35:52 crc kubenswrapper[4779]: I1128 12:35:52.812248 4779 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 12:35:52 crc kubenswrapper[4779]: I1128 12:35:52.812317 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 12:35:52 crc kubenswrapper[4779]: I1128 12:35:52.817663 4779 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 12:35:52 crc kubenswrapper[4779]: I1128 12:35:52.817751 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 12:35:54 crc kubenswrapper[4779]: I1128 12:35:54.964974 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:54 crc kubenswrapper[4779]: I1128 12:35:54.965298 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:54 crc kubenswrapper[4779]: I1128 12:35:54.966985 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:54 crc kubenswrapper[4779]: I1128 12:35:54.967044 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:54 crc kubenswrapper[4779]: I1128 12:35:54.967071 4779 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:54 crc kubenswrapper[4779]: I1128 12:35:54.972596 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:35:55 crc kubenswrapper[4779]: I1128 12:35:55.833785 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:35:55 crc kubenswrapper[4779]: I1128 12:35:55.834753 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:35:55 crc kubenswrapper[4779]: I1128 12:35:55.834788 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:35:55 crc kubenswrapper[4779]: I1128 12:35:55.834800 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.135663 4779 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.654422 4779 apiserver.go:52] "Watching apiserver" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.688610 4779 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.688937 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.689242 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.689784 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.689855 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:35:56 crc kubenswrapper[4779]: E1128 12:35:56.689859 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:35:56 crc kubenswrapper[4779]: E1128 12:35:56.689887 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.689963 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.690139 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.690213 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 12:35:56 crc kubenswrapper[4779]: E1128 12:35:56.690349 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.691774 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.691862 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.692247 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.693044 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.693227 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.693363 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.693538 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.693717 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.694265 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.732603 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.750084 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.761465 4779 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.766990 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.786719 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.800396 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.810781 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:56 crc kubenswrapper[4779]: I1128 12:35:56.835817 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.091126 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.108640 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.109372 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.112905 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.119287 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.129066 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.150353 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.177699 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.206027 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.218288 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.235238 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9
45da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.245589 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.258936 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.271197 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.286132 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.295355 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.807545 4779 trace.go:236] Trace[2099937112]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 12:35:43.784) (total time: 14023ms): Nov 28 12:35:57 crc kubenswrapper[4779]: Trace[2099937112]: ---"Objects listed" error: 14023ms (12:35:57.807) Nov 28 12:35:57 crc kubenswrapper[4779]: Trace[2099937112]: [14.023360627s] [14.023360627s] END Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.807585 4779 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.809078 4779 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.810128 4779 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.853879 4779 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:32930->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.854027 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:32930->192.168.126.11:17697: read: connection reset by peer" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.854551 4779 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.854602 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" 
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.855777 4779 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:32936->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.855850 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:32936->192.168.126.11:17697: read: connection reset by peer" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910535 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910577 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910594 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910611 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910634 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910658 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910678 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910697 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910718 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910736 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910769 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910788 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910810 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910830 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910848 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910868 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910889 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910911 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910930 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910949 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910950 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910970 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.910993 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911015 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911035 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911055 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911076 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911117 4779 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911144 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911168 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911190 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911211 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911213 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911230 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911285 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911322 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911339 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911363 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911397 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911414 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911429 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911445 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911460 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: 
"bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911477 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911496 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911516 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911512 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911553 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911570 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911585 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911601 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911632 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911648 4779 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911662 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911679 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911712 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911730 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911748 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911764 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911800 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911817 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911834 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911866 4779 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911885 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911903 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911919 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911951 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911966 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911982 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911997 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912031 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912049 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 
12:35:57.912066 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912081 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912130 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912146 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912164 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912210 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912232 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912803 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912828 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912898 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 12:35:57 crc kubenswrapper[4779]: 
I1128 12:35:57.912926 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912953 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912997 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913018 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913080 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913119 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913163 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913185 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913209 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913233 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913254 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913274 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913293 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913313 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913426 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913449 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913554 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913579 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913913 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913942 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914049 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914077 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914128 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914154 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914247 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914272 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914293 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914628 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914653 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914710 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914733 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914753 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914857 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914875 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914911 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914928 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914945 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914960 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914996 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915012 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915028 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915043 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915082 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911660 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915271 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915460 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915482 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915544 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915562 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915584 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915620 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915635 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915651 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915666 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915706 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915721 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915738 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915774 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915792 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915807 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915821 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915900 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915924 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911735 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911778 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911947 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.911961 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912163 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912169 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912183 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912504 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912524 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912670 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912744 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912804 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912858 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.912966 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913119 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913358 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913460 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913479 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913563 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913704 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913709 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913712 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913760 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913811 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913957 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.913980 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914029 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914249 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914292 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914337 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914351 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914359 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914470 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914536 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914818 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.914868 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915142 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.915949 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916385 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916421 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916449 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916481 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916505 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916528 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916554 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916580 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916607 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916632 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916657 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916684 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916711 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916735 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916759 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916787 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916813 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916840 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916867 4779 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916895 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916920 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916945 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916969 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916993 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917017 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917041 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917067 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917112 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 
12:35:57.917165 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917281 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917311 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917338 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917366 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917396 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917421 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917445 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917471 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917497 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917520 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917549 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917572 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917596 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917618 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917640 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917663 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917687 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917713 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917737 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917763 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917790 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917817 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917842 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917866 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917894 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917920 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917976 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918013 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918050 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918078 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918127 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918157 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918185 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918213 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918280 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918334 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918365 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918392 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918420 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918446 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918514 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918531 4779 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918544 4779 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918559 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918572 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918584 4779 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918599 4779 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918611 4779 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918627 4779 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918641 4779 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918655 4779 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918669 4779 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918681 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918698 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918712 4779 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918727 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918739 4779 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918752 4779 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918764 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918776 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918787 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 
crc kubenswrapper[4779]: I1128 12:35:57.918800 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918813 4779 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918827 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918842 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918856 4779 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918870 4779 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918882 4779 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918899 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918913 4779 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918925 4779 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918938 4779 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918952 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918964 4779 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 
12:35:57.918976 4779 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918989 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919002 4779 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919016 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919028 4779 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919041 4779 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919055 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919068 4779 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919082 4779 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919537 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916811 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.916852 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917086 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917365 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917383 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917390 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917577 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917768 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.917940 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918335 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918411 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918695 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918753 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918839 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.918934 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919179 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919553 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919593 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919920 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.919976 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.920062 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.920353 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.920437 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.920862 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.924029 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.925016 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.925145 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.925351 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.925580 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.925823 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.926108 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.926088 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.926164 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.926175 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.926334 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.926509 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.926521 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.926692 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.926712 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.926823 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.926370 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.931592 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.936958 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.937849 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.938177 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.938347 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.938758 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.938835 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.939410 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.940214 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.940438 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.941116 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.942676 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.942730 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.943189 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.945001 4779 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.945162 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:35:58.445135819 +0000 UTC m=+19.010811263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.945624 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.944394 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.946544 4779 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.950718 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.950944 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.951229 4779 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.951556 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:35:58.451529968 +0000 UTC m=+19.017205362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.951402 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.951472 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.951599 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:35:58.45158507 +0000 UTC m=+19.017260464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.951617 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.951777 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.952043 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.952267 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.952338 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.952484 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.952773 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.952925 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.952972 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.953013 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.953253 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.953264 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.953211 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.953785 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.954260 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.954278 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.954448 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.955401 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.957431 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.958233 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.969416 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.970786 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.971511 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.978771 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.978811 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.978824 4779 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.978889 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 12:35:58.478867942 +0000 UTC m=+19.044543296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.981662 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.982655 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.982846 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.982874 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.982888 4779 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 12:35:57 crc kubenswrapper[4779]: E1128 12:35:57.982941 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 12:35:58.482925589 +0000 UTC m=+19.048600943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.983408 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.983504 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.984368 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.995327 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:57 crc kubenswrapper[4779]: I1128 12:35:57.996308 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.001358 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.005441 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.005720 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.006207 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.010368 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.011030 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.011162 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.011461 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.011733 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.012015 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.012411 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.012462 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.014282 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.014484 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.016273 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.016866 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019023 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019293 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019493 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019596 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019824 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019850 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019906 4779 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019916 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019925 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019933 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019942 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019951 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019959 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019968 4779 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019976 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019985 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.019994 4779 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020002 4779 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020011 4779 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020019 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020027 4779 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020038 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020046 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020054 4779 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020064 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020072 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020081 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020104 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020113 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020121 4779 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020129 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020143 4779 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020152 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020160 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020168 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020176 4779 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020185 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020199 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020208 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020217 4779 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020226 4779 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020235 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020243 4779 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020252 4779 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020260 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020269 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020277 4779 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020285 4779 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020293 4779 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020301 4779 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020309 4779 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020317 4779 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020325 4779 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020333 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020341 4779 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020349 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020357 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020366 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020374 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020382 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020390 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020398 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020407 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020414 4779 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020455 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020464 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020473 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020481 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020490 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020499 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020507 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020515 4779 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020525 4779 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020535 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020543 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020552 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020560 4779 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020567 4779 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020576 4779 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020583 4779 reconciler_common.go:293] "Volume
detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020592 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020600 4779 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020608 4779 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020616 4779 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020625 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020633 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020641 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020650 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020658 4779 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020665 4779 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020674 4779 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020682 4779 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020689 4779 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020698 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020706 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020715 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020724 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020732 4779 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020741 4779 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020749 4779 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020757 4779 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020765 4779 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020773 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020781 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020790 4779 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020805 4779 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020815 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020823 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020929 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 12:35:58 crc kubenswrapper[4779]: W1128 12:35:58.020985 4779 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.020992 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.021014 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.021311 4779 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.022485 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.023309 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.023381 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.023624 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.023872 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.023974 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.024060 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.024294 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.030850 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.031343 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.033336 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.033678 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.034057 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.041529 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.041864 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.042057 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.047349 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.047596 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.048513 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.049196 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.051488 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.052378 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.052425 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.052599 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.052596 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.052684 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.052601 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.053187 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.053197 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.053317 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.056297 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.056317 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.057362 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.057490 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.058525 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.061613 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.065713 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.065883 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.066082 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.066325 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.066384 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.068383 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.068631 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.075547 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-dlvj8"] Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.075753 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kj9g2"] Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.075891 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dlvj8" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.075950 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.076344 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.076396 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.076646 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.077725 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.078169 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.080711 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.080790 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.093049 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.093417 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.093809 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.093892 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.094020 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.094139 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.094266 4779 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.094423 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.094572 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.097843 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.104806 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.112647 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.115587 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.119930 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode
\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.121166 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.121277 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.121377 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.121457 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.121536 4779 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.121618 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.121702 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 
12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.121782 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.121868 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.121955 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122038 4779 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122139 4779 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122222 4779 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122298 4779 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122380 4779 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122460 4779 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122541 4779 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122606 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122666 4779 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122732 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 
12:35:58.122788 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122848 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122921 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.122994 4779 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123071 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123169 4779 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123233 4779 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123287 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123354 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123424 4779 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123483 4779 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123540 4779 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123595 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node 
\"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123653 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123706 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123758 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123815 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123868 4779 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123924 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.123980 4779 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.124036 4779 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.124107 4779 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.124188 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.124254 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.124315 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.124370 4779 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.124424 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.124480 4779 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.124555 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.129666 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.137809 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.145820 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.155659 4779 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.169937 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.178769 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.187201 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.194635 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.201315 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.201748 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.208597 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.214392 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 28 12:35:58 crc kubenswrapper[4779]: W1128 12:35:58.214604 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-3ae1121f084b43e5f3bf52a59539671f3945d3c4cd073f57e91968fbc3f1c7e5 WatchSource:0}: Error finding container 3ae1121f084b43e5f3bf52a59539671f3945d3c4cd073f57e91968fbc3f1c7e5: Status 404 returned error can't find the container with id 3ae1121f084b43e5f3bf52a59539671f3945d3c4cd073f57e91968fbc3f1c7e5
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.216052 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 12:35:58 crc kubenswrapper[4779]: W1128 12:35:58.216953 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-f992f314742469edfa915810254b0d24fcfb3790989a102e944897f817c848f9 WatchSource:0}: Error finding container f992f314742469edfa915810254b0d24fcfb3790989a102e944897f817c848f9: Status 404 returned error can't find the container with id f992f314742469edfa915810254b0d24fcfb3790989a102e944897f817c848f9
Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.225798 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db55w\" (UniqueName: \"kubernetes.io/projected/a8b3aa68-52ee-40cd-a059-6e410b826ce7-kube-api-access-db55w\") pod \"node-resolver-dlvj8\" (UID: \"a8b3aa68-52ee-40cd-a059-6e410b826ce7\") " 
pod="openshift-dns/node-resolver-dlvj8" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.225839 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b2a3eb4-4de5-491b-b466-3a35b7d745ec-mcd-auth-proxy-config\") pod \"machine-config-daemon-kj9g2\" (UID: \"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\") " pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.225873 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzg5f\" (UniqueName: \"kubernetes.io/projected/3b2a3eb4-4de5-491b-b466-3a35b7d745ec-kube-api-access-rzg5f\") pod \"machine-config-daemon-kj9g2\" (UID: \"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\") " pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.225922 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3b2a3eb4-4de5-491b-b466-3a35b7d745ec-proxy-tls\") pod \"machine-config-daemon-kj9g2\" (UID: \"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\") " pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.225964 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a8b3aa68-52ee-40cd-a059-6e410b826ce7-hosts-file\") pod \"node-resolver-dlvj8\" (UID: \"a8b3aa68-52ee-40cd-a059-6e410b826ce7\") " pod="openshift-dns/node-resolver-dlvj8" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.225994 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3b2a3eb4-4de5-491b-b466-3a35b7d745ec-rootfs\") pod \"machine-config-daemon-kj9g2\" (UID: \"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\") " pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.226150 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.240404 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.326647 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzg5f\" (UniqueName: \"kubernetes.io/projected/3b2a3eb4-4de5-491b-b466-3a35b7d745ec-kube-api-access-rzg5f\") pod \"machine-config-daemon-kj9g2\" (UID: \"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\") " pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.326709 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a8b3aa68-52ee-40cd-a059-6e410b826ce7-hosts-file\") pod \"node-resolver-dlvj8\" (UID: \"a8b3aa68-52ee-40cd-a059-6e410b826ce7\") " pod="openshift-dns/node-resolver-dlvj8" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.326733 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3b2a3eb4-4de5-491b-b466-3a35b7d745ec-proxy-tls\") pod \"machine-config-daemon-kj9g2\" (UID: \"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\") " pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.326757 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3b2a3eb4-4de5-491b-b466-3a35b7d745ec-rootfs\") pod \"machine-config-daemon-kj9g2\" (UID: \"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\") " pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.326792 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db55w\" (UniqueName: \"kubernetes.io/projected/a8b3aa68-52ee-40cd-a059-6e410b826ce7-kube-api-access-db55w\") pod \"node-resolver-dlvj8\" (UID: \"a8b3aa68-52ee-40cd-a059-6e410b826ce7\") " pod="openshift-dns/node-resolver-dlvj8" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.326811 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b2a3eb4-4de5-491b-b466-3a35b7d745ec-mcd-auth-proxy-config\") pod \"machine-config-daemon-kj9g2\" (UID: \"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\") " pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.327571 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b2a3eb4-4de5-491b-b466-3a35b7d745ec-mcd-auth-proxy-config\") pod \"machine-config-daemon-kj9g2\" (UID: 
\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\") " pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.327603 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3b2a3eb4-4de5-491b-b466-3a35b7d745ec-rootfs\") pod \"machine-config-daemon-kj9g2\" (UID: \"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\") " pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.327570 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a8b3aa68-52ee-40cd-a059-6e410b826ce7-hosts-file\") pod \"node-resolver-dlvj8\" (UID: \"a8b3aa68-52ee-40cd-a059-6e410b826ce7\") " pod="openshift-dns/node-resolver-dlvj8" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.333109 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3b2a3eb4-4de5-491b-b466-3a35b7d745ec-proxy-tls\") pod \"machine-config-daemon-kj9g2\" (UID: \"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\") " pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.350616 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzg5f\" (UniqueName: \"kubernetes.io/projected/3b2a3eb4-4de5-491b-b466-3a35b7d745ec-kube-api-access-rzg5f\") pod \"machine-config-daemon-kj9g2\" (UID: \"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\") " pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.350685 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db55w\" (UniqueName: \"kubernetes.io/projected/a8b3aa68-52ee-40cd-a059-6e410b826ce7-kube-api-access-db55w\") pod \"node-resolver-dlvj8\" (UID: \"a8b3aa68-52ee-40cd-a059-6e410b826ce7\") " pod="openshift-dns/node-resolver-dlvj8" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.409683 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dlvj8" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.419122 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:35:58 crc kubenswrapper[4779]: W1128 12:35:58.419521 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8b3aa68_52ee_40cd_a059_6e410b826ce7.slice/crio-663d40120e210dda29070a86437233b8f566b580605e2e3b3a10cfb057f0e9af WatchSource:0}: Error finding container 663d40120e210dda29070a86437233b8f566b580605e2e3b3a10cfb057f0e9af: Status 404 returned error can't find the container with id 663d40120e210dda29070a86437233b8f566b580605e2e3b3a10cfb057f0e9af Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.426394 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-pzwdx"] Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.427685 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2gg4m"] Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.427796 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.430354 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.431235 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.433204 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.433061 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.439117 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.439499 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.439624 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.443677 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.461916 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6
af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.471114 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.478524 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.486928 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.494261 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.502568 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.511979 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.519475 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528319 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528412 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-cni-binary-copy\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528441 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-var-lib-cni-multus\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528477 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-cnibin\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.528505 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:35:59.528478399 +0000 UTC m=+20.094153753 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528545 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-run-k8s-cni-cncf-io\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528659 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-multus-cni-dir\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528737 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-multus-socket-dir-parent\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528790 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-system-cni-dir\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528835 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528861 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfslc\" (UniqueName: \"kubernetes.io/projected/ba664a9e-76d2-4d02-889a-e7062bfc903c-kube-api-access-nfslc\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528883 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-system-cni-dir\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528902 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-multus-conf-dir\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.528940 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js6cp\" (UniqueName: \"kubernetes.io/projected/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-kube-api-access-js6cp\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.529036 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.529062 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.529075 4779 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529036 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-os-release\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.529129 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 12:35:59.529115766 +0000 UTC m=+20.094791120 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529150 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-run-netns\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529171 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ba664a9e-76d2-4d02-889a-e7062bfc903c-cni-binary-copy\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529195 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529215 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ba664a9e-76d2-4d02-889a-e7062bfc903c-multus-daemon-config\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529233 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529249 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-hostroot\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529267 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529283 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-run-multus-certs\") pod \"multus-pzwdx\" (UID: 
\"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529299 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-etc-kubernetes\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529317 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-os-release\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529333 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-var-lib-cni-bin\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529353 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529370 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-cnibin\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529393 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.529411 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-var-lib-kubelet\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.529494 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.529505 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.529512 4779 projected.go:194] Error 
preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.529534 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 12:35:59.529527057 +0000 UTC m=+20.095202411 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.529565 4779 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.529582 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:35:59.529577378 +0000 UTC m=+20.095252732 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.529681 4779 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.529712 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:35:59.529706841 +0000 UTC m=+20.095382195 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.531252 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.542376 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.559274 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.568167 4779 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.588140 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.609126 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.620512 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630479 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-var-lib-cni-bin\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630515 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630542 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-cnibin\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630560 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-var-lib-kubelet\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630595 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-cni-binary-copy\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630626 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-cnibin\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630642 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-run-k8s-cni-cncf-io\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630656 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-var-lib-cni-multus\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630672 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-multus-socket-dir-parent\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630689 
4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-system-cni-dir\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630706 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-multus-cni-dir\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630727 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfslc\" (UniqueName: \"kubernetes.io/projected/ba664a9e-76d2-4d02-889a-e7062bfc903c-kube-api-access-nfslc\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630743 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-system-cni-dir\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630760 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js6cp\" (UniqueName: \"kubernetes.io/projected/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-kube-api-access-js6cp\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630775 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-os-release\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630788 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-run-netns\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630802 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-multus-conf-dir\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630815 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ba664a9e-76d2-4d02-889a-e7062bfc903c-cni-binary-copy\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630816 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-cnibin\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630860 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-hostroot\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630889 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ba664a9e-76d2-4d02-889a-e7062bfc903c-multus-daemon-config\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630896 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-var-lib-cni-bin\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630907 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630925 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-run-multus-certs\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630944 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-etc-kubernetes\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.630971 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-os-release\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.631155 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-os-release\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.631198 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-run-k8s-cni-cncf-io\") pod \"multus-pzwdx\" (UID: 
\"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.631221 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-var-lib-cni-multus\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.631226 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-cnibin\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.631298 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-multus-socket-dir-parent\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.631321 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-var-lib-kubelet\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.631350 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-system-cni-dir\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.631426 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.631447 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-multus-cni-dir\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.631485 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-multus-conf-dir\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.631817 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-system-cni-dir\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.632024 4779 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-os-release\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.632061 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-run-netns\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.632352 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ba664a9e-76d2-4d02-889a-e7062bfc903c-cni-binary-copy\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.632407 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-host-run-multus-certs\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.632434 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-etc-kubernetes\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.632456 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ba664a9e-76d2-4d02-889a-e7062bfc903c-hostroot\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.632791 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.632949 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ba664a9e-76d2-4d02-889a-e7062bfc903c-multus-daemon-config\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.632958 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-cni-binary-copy\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.644410 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.656258 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfslc\" (UniqueName: \"kubernetes.io/projected/ba664a9e-76d2-4d02-889a-e7062bfc903c-kube-api-access-nfslc\") pod \"multus-pzwdx\" (UID: \"ba664a9e-76d2-4d02-889a-e7062bfc903c\") " pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.660355 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.662183 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js6cp\" (UniqueName: \"kubernetes.io/projected/1bd5bc7d-159f-4f4e-8647-8a373e47d35f-kube-api-access-js6cp\") pod \"multus-additional-cni-plugins-2gg4m\" (UID: \"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\") " pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.674010 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.694395 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.705773 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.723521 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.725602 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.725726 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.726085 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.726296 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.726464 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:35:58 crc kubenswrapper[4779]: E1128 12:35:58.726572 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.732919 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.743479 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.755132 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-pzwdx" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.760964 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource
-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.763018 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" Nov 28 12:35:58 crc kubenswrapper[4779]: W1128 12:35:58.771467 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba664a9e_76d2_4d02_889a_e7062bfc903c.slice/crio-e25674f3e17b029a56cb8f75a738f36b79e39585e228576613605ca75cdc2f64 WatchSource:0}: Error finding container e25674f3e17b029a56cb8f75a738f36b79e39585e228576613605ca75cdc2f64: Status 404 returned error can't find the container with id e25674f3e17b029a56cb8f75a738f36b79e39585e228576613605ca75cdc2f64 Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.781581 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pbmbn"] Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.782293 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.791314 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.792749 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.795204 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.795638 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.795729 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.795730 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.796018 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.804173 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.816045 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.831348 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.851496 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.851535 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.851544 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"48968c9506036732e50c04f4ca97e04c2e08dcb7b60d517cb5f44f2fe999e607"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.853055 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"eff858e5c7b0ed81321c11bd8d36e1ea57dcf9e8d4614a1534eaf6cdf2376661"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.854479 4779 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1
b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.855586 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.855617 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3ae1121f084b43e5f3bf52a59539671f3945d3c4cd073f57e91968fbc3f1c7e5"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.856478 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pzwdx" event={"ID":"ba664a9e-76d2-4d02-889a-e7062bfc903c","Type":"ContainerStarted","Data":"e25674f3e17b029a56cb8f75a738f36b79e39585e228576613605ca75cdc2f64"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.857468 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dlvj8" event={"ID":"a8b3aa68-52ee-40cd-a059-6e410b826ce7","Type":"ContainerStarted","Data":"4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.857501 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dlvj8" event={"ID":"a8b3aa68-52ee-40cd-a059-6e410b826ce7","Type":"ContainerStarted","Data":"663d40120e210dda29070a86437233b8f566b580605e2e3b3a10cfb057f0e9af"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.858661 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.858685 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.858696 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f992f314742469edfa915810254b0d24fcfb3790989a102e944897f817c848f9"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.860755 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.862854 4779 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710" exitCode=255 Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.862929 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.864209 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" event={"ID":"1bd5bc7d-159f-4f4e-8647-8a373e47d35f","Type":"ContainerStarted","Data":"932c836cb91f422192522ddb5efad42c8abc6efdc133f25a5039e4772e5beefe"} Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.906503 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:58Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934401 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-node-log\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 
12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934438 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-ovnkube-config\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934461 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5msg\" (UniqueName: \"kubernetes.io/projected/35f4f43e-a921-41b2-aa88-506055daff60-kube-api-access-q5msg\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934589 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-systemd\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934642 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-ovn\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934665 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/35f4f43e-a921-41b2-aa88-506055daff60-ovn-node-metrics-cert\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934715 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-kubelet\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934739 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-ovnkube-script-lib\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934775 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-openvswitch\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934810 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-run-ovn-kubernetes\") pod \"ovnkube-node-pbmbn\" (UID: 
\"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934852 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-var-lib-openvswitch\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934883 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-run-netns\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934900 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-cni-bin\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934927 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-slash\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934947 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.934971 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-env-overrides\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.935018 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-cni-netd\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.935071 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-etc-openvswitch\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.935120 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-log-socket\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.935146 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-systemd-units\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.937797 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:58Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.954069 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:58Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.954391 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 12:35:58 crc kubenswrapper[4779]: I1128 12:35:58.954537 4779 scope.go:117] "RemoveContainer" containerID="9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.009276 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.031105 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036541 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-kubelet\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036573 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-ovnkube-script-lib\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036605 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-var-lib-openvswitch\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036622 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-openvswitch\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036638 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-run-ovn-kubernetes\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036635 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-kubelet\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036655 4779 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-run-netns\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036669 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-cni-bin\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036685 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036694 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-openvswitch\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036704 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-env-overrides\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036724 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-var-lib-openvswitch\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036723 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-slash\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036763 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-slash\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036776 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-cni-netd\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036798 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-cni-netd\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036809 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-etc-openvswitch\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036810 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036847 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-log-socket\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036846 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-run-netns\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036830 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-log-socket\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036918 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-systemd-units\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036956 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-node-log\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036809 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-cni-bin\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036971 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-ovnkube-config\") pod \"ovnkube-node-pbmbn\" 
(UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036827 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-etc-openvswitch\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036991 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5msg\" (UniqueName: \"kubernetes.io/projected/35f4f43e-a921-41b2-aa88-506055daff60-kube-api-access-q5msg\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.036994 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-systemd-units\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.037027 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-systemd\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.037008 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-systemd\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.037002 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-node-log\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.037054 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-ovn\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.037069 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/35f4f43e-a921-41b2-aa88-506055daff60-ovn-node-metrics-cert\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.037113 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-ovn\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.037397 
4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-env-overrides\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.037453 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-run-ovn-kubernetes\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.037619 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-ovnkube-config\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.037745 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-ovnkube-script-lib\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.041429 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/35f4f43e-a921-41b2-aa88-506055daff60-ovn-node-metrics-cert\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.060710 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.069667 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5msg\" (UniqueName: \"kubernetes.io/projected/35f4f43e-a921-41b2-aa88-506055daff60-kube-api-access-q5msg\") pod \"ovnkube-node-pbmbn\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.075264 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.095672 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.107614 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.111491 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.126905 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.150199 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.164321 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.175860 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: W1128 12:35:59.188740 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35f4f43e_a921_41b2_aa88_506055daff60.slice/crio-b865dc8e82285005474063520e288759e283d7dcbb76e1e9245a86914efd1e56 WatchSource:0}: Error finding container b865dc8e82285005474063520e288759e283d7dcbb76e1e9245a86914efd1e56: Status 404 returned error can't find the container with id b865dc8e82285005474063520e288759e283d7dcbb76e1e9245a86914efd1e56 Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.192465 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.204866 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.222577 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.234158 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.247968 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.272386 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702
f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.297787 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.310949 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.324641 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.357245 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.542281 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.542382 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.542411 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542458 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:36:01.542416135 +0000 UTC m=+22.108091489 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.542505 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.542566 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542510 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542599 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542612 4779 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542661 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:01.542644791 +0000 UTC m=+22.108320255 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542687 4779 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542721 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:01.542714343 +0000 UTC m=+22.108389697 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542545 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542749 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542761 4779 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542785 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:01.542779655 +0000 UTC m=+22.108455009 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542575 4779 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:35:59 crc kubenswrapper[4779]: E1128 12:35:59.542810 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:01.542805446 +0000 UTC m=+22.108480800 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.729689 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.730653 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.731332 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.731972 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.732638 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.733216 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.733812 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.734415 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.735046 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.736977 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.737653 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.738873 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.739604 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.740726 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.741386 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.742987 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.743868 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.744325 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.744957 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.745765 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.745974 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.746483 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.748211 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.748759 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.750066 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.750658 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.751427 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.752634 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.753212 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.754175 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.754649 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.755486 4779 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.755586 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.756739 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.757509 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.758821 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.759529 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.761071 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.761736 4779 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.762658 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.763528 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.764679 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.765183 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.766368 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.766984 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.768004 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.768470 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.769743 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.770912 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.771235 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-che
ck-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.772243 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.772819 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.773863 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.774478 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 
12:35:59.775586 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.776247 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.776720 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.784279 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.797534 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.813591 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.826172 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.845158 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.860973 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.866254 4779 generic.go:334] "Generic (PLEG): container finished" podID="1bd5bc7d-159f-4f4e-8647-8a373e47d35f" containerID="ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba" exitCode=0 Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.866339 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" event={"ID":"1bd5bc7d-159f-4f4e-8647-8a373e47d35f","Type":"ContainerDied","Data":"ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba"} Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.867750 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"b865dc8e82285005474063520e288759e283d7dcbb76e1e9245a86914efd1e56"} Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.870006 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pzwdx" event={"ID":"ba664a9e-76d2-4d02-889a-e7062bfc903c","Type":"ContainerStarted","Data":"5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47"} Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.914986 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.939344 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.966476 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.972246 4779 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-image-registry/node-ca-dwgdn"] Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.972571 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dwgdn" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.973797 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.975688 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.975795 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.975868 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 12:35:59 crc kubenswrapper[4779]: I1128 12:35:59.997683 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b9
9ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:35:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.017662 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.042664 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.047168 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6zn6\" (UniqueName: \"kubernetes.io/projected/13786eba-201c-40ca-89b7-174795999a9d-kube-api-access-v6zn6\") pod \"node-ca-dwgdn\" (UID: \"13786eba-201c-40ca-89b7-174795999a9d\") " pod="openshift-image-registry/node-ca-dwgdn" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.047235 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13786eba-201c-40ca-89b7-174795999a9d-host\") pod \"node-ca-dwgdn\" (UID: \"13786eba-201c-40ca-89b7-174795999a9d\") " pod="openshift-image-registry/node-ca-dwgdn" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.047268 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/13786eba-201c-40ca-89b7-174795999a9d-serviceca\") pod \"node-ca-dwgdn\" (UID: \"13786eba-201c-40ca-89b7-174795999a9d\") " pod="openshift-image-registry/node-ca-dwgdn" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.082265 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.120976 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-override
s\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.148650 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6zn6\" (UniqueName: \"kubernetes.io/projected/13786eba-201c-40ca-89b7-174795999a9d-kube-api-access-v6zn6\") pod \"node-ca-dwgdn\" (UID: \"13786eba-201c-40ca-89b7-174795999a9d\") " pod="openshift-image-registry/node-ca-dwgdn" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.148699 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13786eba-201c-40ca-89b7-174795999a9d-host\") pod \"node-ca-dwgdn\" (UID: \"13786eba-201c-40ca-89b7-174795999a9d\") " pod="openshift-image-registry/node-ca-dwgdn" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.148724 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/13786eba-201c-40ca-89b7-174795999a9d-serviceca\") pod \"node-ca-dwgdn\" (UID: \"13786eba-201c-40ca-89b7-174795999a9d\") " pod="openshift-image-registry/node-ca-dwgdn" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.148859 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13786eba-201c-40ca-89b7-174795999a9d-host\") pod \"node-ca-dwgdn\" (UID: \"13786eba-201c-40ca-89b7-174795999a9d\") " pod="openshift-image-registry/node-ca-dwgdn" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.149590 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/13786eba-201c-40ca-89b7-174795999a9d-serviceca\") pod \"node-ca-dwgdn\" (UID: \"13786eba-201c-40ca-89b7-174795999a9d\") " pod="openshift-image-registry/node-ca-dwgdn" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.159564 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.195379 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6zn6\" (UniqueName: \"kubernetes.io/projected/13786eba-201c-40ca-89b7-174795999a9d-kube-api-access-v6zn6\") pod \"node-ca-dwgdn\" (UID: \"13786eba-201c-40ca-89b7-174795999a9d\") " pod="openshift-image-registry/node-ca-dwgdn" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.223215 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.262355 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc 
kubenswrapper[4779]: I1128 12:36:00.297954 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.326652 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dwgdn" Nov 28 12:36:00 crc kubenswrapper[4779]: W1128 12:36:00.341694 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13786eba_201c_40ca_89b7_174795999a9d.slice/crio-d28e580844c63394f5745467a29a47175840245787057e4e3bf8a158b642e319 WatchSource:0}: Error finding container d28e580844c63394f5745467a29a47175840245787057e4e3bf8a158b642e319: Status 404 returned error can't find the container with id d28e580844c63394f5745467a29a47175840245787057e4e3bf8a158b642e319 Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.347437 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\
\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.386145 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd
90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.424936 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.458309 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.515383 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae
0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.542189 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.578857 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.622045 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.725808 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.725848 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:00 crc kubenswrapper[4779]: E1128 12:36:00.725926 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:00 crc kubenswrapper[4779]: E1128 12:36:00.726272 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.726358 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:00 crc kubenswrapper[4779]: E1128 12:36:00.726420 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.876638 4779 generic.go:334] "Generic (PLEG): container finished" podID="1bd5bc7d-159f-4f4e-8647-8a373e47d35f" containerID="e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f" exitCode=0 Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.876731 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" event={"ID":"1bd5bc7d-159f-4f4e-8647-8a373e47d35f","Type":"ContainerDied","Data":"e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f"} Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.879620 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e"} Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.881435 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29" exitCode=0 Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.881503 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29"} Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.882878 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dwgdn" event={"ID":"13786eba-201c-40ca-89b7-174795999a9d","Type":"ContainerStarted","Data":"ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942"} Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.882924 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dwgdn" event={"ID":"13786eba-201c-40ca-89b7-174795999a9d","Type":"ContainerStarted","Data":"d28e580844c63394f5745467a29a47175840245787057e4e3bf8a158b642e319"} Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.885806 4779 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.887656 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74"} Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.887981 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.899707 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha
256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.920430 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.942547 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:00 crc kubenswrapper[4779]: I1128 12:36:00.980022 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.010379 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.011036 4779 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.015729 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.015811 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.015835 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.016037 4779 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.024734 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.029911 4779 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.030127 4779 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.031398 4779 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.031434 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.031447 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.031468 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.031481 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.046039 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.056521 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.061993 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.072979 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.088619 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.093277 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.093365 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.093381 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.093404 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.093417 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.141306 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.152705 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8805
1c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.158981 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:01 crc 
kubenswrapper[4779]: I1128 12:36:01.159008 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.159016 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.159028 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.159037 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.160328 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.182941 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.189849 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.189879 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.189888 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.189902 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.189910 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.191794 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.203884 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.207924 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.207953 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.207960 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.207974 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.207983 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.220397 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.220500 4779 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.221981 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.222009 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.222024 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.222043 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.222051 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.222366 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"nam
e\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.260525 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.302545 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.323916 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.323955 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.323966 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.323982 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.323994 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.341366 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.381354 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.423411 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.425883 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.426006 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.426068 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.426436 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.426497 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.460391 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.504685 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.529437 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.529474 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.529483 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.529502 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.529514 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.542785 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.564310 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.564486 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:36:05.564460813 +0000 UTC m=+26.130136157 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.564714 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.564833 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.564936 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.565045 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.564905 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.565290 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.565393 4779 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.565010 4779 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.565117 4779 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 
12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.565611 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:05.565582083 +0000 UTC m=+26.131257447 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.565174 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.565646 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.565660 4779 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.565638 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:05.565628204 +0000 UTC m=+26.131303568 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.565708 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:05.565697866 +0000 UTC m=+26.131373220 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:01 crc kubenswrapper[4779]: E1128 12:36:01.565912 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:05.565889311 +0000 UTC m=+26.131564705 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.580979 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.625699 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\
\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.633523 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.633556 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.633565 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.633580 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.633590 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.665284 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.711444 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z 
is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.735213 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.735461 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.735558 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.735644 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.735731 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.739508 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.785481 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-k
ube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.825029 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.839260 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.839311 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.839331 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.839355 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.839375 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.858457 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.896330 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.896781 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.897133 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" 
event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.897356 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.897954 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.898679 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.899355 4779 generic.go:334] "Generic (PLEG): container finished" podID="1bd5bc7d-159f-4f4e-8647-8a373e47d35f" containerID="95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5" exitCode=0 Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.899414 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" event={"ID":"1bd5bc7d-159f-4f4e-8647-8a373e47d35f","Type":"ContainerDied","Data":"95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.918683 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.943115 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.943332 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.943401 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.943492 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.943560 4779 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:01Z","lastTransitionTime":"2025-11-28T12:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.944963 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:01 crc kubenswrapper[4779]: I1128 12:36:01.980429 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.020150 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.046043 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.046082 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.046113 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.046130 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.046143 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:02Z","lastTransitionTime":"2025-11-28T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.061395 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.099206 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.138722 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.148049 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.148143 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.148159 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.148183 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.148198 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:02Z","lastTransitionTime":"2025-11-28T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.179301 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.219901 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.250933 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.250978 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.250988 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.251003 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.251011 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:02Z","lastTransitionTime":"2025-11-28T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.266597 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.300304 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.342484 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.354471 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.354530 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.354549 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.354572 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.354590 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:02Z","lastTransitionTime":"2025-11-28T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.395353 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c
28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.432660 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.457538 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.457602 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.457619 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.457646 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.457664 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:02Z","lastTransitionTime":"2025-11-28T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.464266 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.561218 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.561285 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.561300 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.561329 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.561345 4779 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:02Z","lastTransitionTime":"2025-11-28T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.665133 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.665202 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.665225 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.665254 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.665275 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:02Z","lastTransitionTime":"2025-11-28T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.725544 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.725655 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:02 crc kubenswrapper[4779]: E1128 12:36:02.725724 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:02 crc kubenswrapper[4779]: E1128 12:36:02.725886 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.725664 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:02 crc kubenswrapper[4779]: E1128 12:36:02.726125 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.767828 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.767890 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.767901 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.767923 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.767936 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:02Z","lastTransitionTime":"2025-11-28T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.871412 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.871458 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.871476 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.871495 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.871504 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:02Z","lastTransitionTime":"2025-11-28T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.909823 4779 generic.go:334] "Generic (PLEG): container finished" podID="1bd5bc7d-159f-4f4e-8647-8a373e47d35f" containerID="e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc" exitCode=0 Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.909877 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" event={"ID":"1bd5bc7d-159f-4f4e-8647-8a373e47d35f","Type":"ContainerDied","Data":"e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc"} Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.932475 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d444
46078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.954892 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.971012 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.973830 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.973873 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.973890 4779 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.973914 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.973932 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:02Z","lastTransitionTime":"2025-11-28T12:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:02 crc kubenswrapper[4779]: I1128 12:36:02.991664 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.009673 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.023153 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.039807 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.060554 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.077992 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.078027 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.078056 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.078086 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.078111 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:03Z","lastTransitionTime":"2025-11-28T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.078247 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.093460 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.111772 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.125300 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.142821 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.156742 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.169184 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.180369 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.180405 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.180419 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.180459 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.180472 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:03Z","lastTransitionTime":"2025-11-28T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.283368 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.283418 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.283431 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.283453 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.283469 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:03Z","lastTransitionTime":"2025-11-28T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.386536 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.386573 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.386581 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.386596 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.386606 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:03Z","lastTransitionTime":"2025-11-28T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.489662 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.489731 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.489757 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.489788 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.489813 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:03Z","lastTransitionTime":"2025-11-28T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.591876 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.591955 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.591980 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.592012 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.592034 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:03Z","lastTransitionTime":"2025-11-28T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.695126 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.695159 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.695196 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.695213 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.695224 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:03Z","lastTransitionTime":"2025-11-28T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.799148 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.799209 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.799226 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.799252 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.799274 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:03Z","lastTransitionTime":"2025-11-28T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.903199 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.903241 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.903255 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.903287 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.903324 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:03Z","lastTransitionTime":"2025-11-28T12:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.917905 4779 generic.go:334] "Generic (PLEG): container finished" podID="1bd5bc7d-159f-4f4e-8647-8a373e47d35f" containerID="c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f" exitCode=0 Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.917989 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" event={"ID":"1bd5bc7d-159f-4f4e-8647-8a373e47d35f","Type":"ContainerDied","Data":"c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f"} Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.927821 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0"} Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.940405 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.962524 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:03 crc kubenswrapper[4779]: I1128 12:36:03.979663 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.000183 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:03Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.005665 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.005708 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.005720 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.005739 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.005750 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:04Z","lastTransitionTime":"2025-11-28T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.019697 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.054504 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c12
55bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.072578 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.091499 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.109781 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.109827 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.109844 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.109867 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.109886 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:04Z","lastTransitionTime":"2025-11-28T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.112974 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.132532 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.148665 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee
3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.170838 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni
-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.201527 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.212762 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.212794 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.212803 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.212817 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.212827 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:04Z","lastTransitionTime":"2025-11-28T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.228261 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd
994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.242572 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.315454 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.315515 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.315527 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.315544 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.315557 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:04Z","lastTransitionTime":"2025-11-28T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.418123 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.418486 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.418502 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.418525 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.418541 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:04Z","lastTransitionTime":"2025-11-28T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.522037 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.522074 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.522084 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.522119 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.522130 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:04Z","lastTransitionTime":"2025-11-28T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.624748 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.624801 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.624819 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.624839 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.624850 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:04Z","lastTransitionTime":"2025-11-28T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.726346 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:04 crc kubenswrapper[4779]: E1128 12:36:04.726467 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.726906 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:04 crc kubenswrapper[4779]: E1128 12:36:04.726978 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.727025 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:04 crc kubenswrapper[4779]: E1128 12:36:04.727078 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.728307 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.728337 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.728348 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.728361 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.728373 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:04Z","lastTransitionTime":"2025-11-28T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.830547 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.830578 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.830585 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.830597 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.830605 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:04Z","lastTransitionTime":"2025-11-28T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.932938 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.933030 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.933063 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.933142 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.933166 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:04Z","lastTransitionTime":"2025-11-28T12:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.938965 4779 generic.go:334] "Generic (PLEG): container finished" podID="1bd5bc7d-159f-4f4e-8647-8a373e47d35f" containerID="ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f" exitCode=0 Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.939015 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" event={"ID":"1bd5bc7d-159f-4f4e-8647-8a373e47d35f","Type":"ContainerDied","Data":"ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f"} Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.959511 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:04 crc kubenswrapper[4779]: I1128 12:36:04.979528 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:04Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.003943 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.037849 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.038176 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.038217 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.038234 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.038273 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.038291 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:05Z","lastTransitionTime":"2025-11-28T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.074040 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd
994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.090203 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.108537 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.127192 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.142640 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.142688 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.142701 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.142807 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.142824 4779 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:05Z","lastTransitionTime":"2025-11-28T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.148331 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.166834 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.182322 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.205324 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.228232 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.247671 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.249292 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.249421 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.249449 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.249493 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.249534 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:05Z","lastTransitionTime":"2025-11-28T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.266746 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.352849 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.352893 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.352907 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.352931 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.352944 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:05Z","lastTransitionTime":"2025-11-28T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.456003 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.456088 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.456161 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.456194 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.456215 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:05Z","lastTransitionTime":"2025-11-28T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.559385 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.559469 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.559494 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.559526 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.559547 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:05Z","lastTransitionTime":"2025-11-28T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.619416 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:36:05 crc kubenswrapper[4779]: E1128 12:36:05.619651 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:36:13.619606572 +0000 UTC m=+34.185281986 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.619747 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.619859 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.619924 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.620004 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:05 crc kubenswrapper[4779]: E1128 12:36:05.620035 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:36:05 crc kubenswrapper[4779]: E1128 12:36:05.620078 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:36:05 crc kubenswrapper[4779]: E1128 12:36:05.620146 4779 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:05 crc kubenswrapper[4779]: E1128 12:36:05.620160 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:36:05 crc kubenswrapper[4779]: E1128 12:36:05.620206 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:36:05 crc 
kubenswrapper[4779]: E1128 12:36:05.620219 4779 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:36:05 crc kubenswrapper[4779]: E1128 12:36:05.620260 4779 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:36:05 crc kubenswrapper[4779]: E1128 12:36:05.620229 4779 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:05 crc kubenswrapper[4779]: E1128 12:36:05.620240 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:13.620209808 +0000 UTC m=+34.185885202 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:05 crc kubenswrapper[4779]: E1128 12:36:05.620431 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:13.620404053 +0000 UTC m=+34.186079447 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:36:05 crc kubenswrapper[4779]: E1128 12:36:05.620465 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:13.620445124 +0000 UTC m=+34.186120598 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:36:05 crc kubenswrapper[4779]: E1128 12:36:05.620724 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:13.620696721 +0000 UTC m=+34.186372115 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.663074 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.663143 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.663155 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.663175 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.663188 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:05Z","lastTransitionTime":"2025-11-28T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.785762 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.785855 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.785877 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.785905 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.785928 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:05Z","lastTransitionTime":"2025-11-28T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.888891 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.888982 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.889056 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.889120 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.889143 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:05Z","lastTransitionTime":"2025-11-28T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.948531 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" event={"ID":"1bd5bc7d-159f-4f4e-8647-8a373e47d35f","Type":"ContainerStarted","Data":"ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b"} Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.971519 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.990624 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:05Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.992849 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.992893 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.992909 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.992972 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:05 crc kubenswrapper[4779]: I1128 12:36:05.992991 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:05Z","lastTransitionTime":"2025-11-28T12:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.014568 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.032841 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.060030 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.076465 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.096388 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.096429 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.096448 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.096476 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.096497 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:06Z","lastTransitionTime":"2025-11-28T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.102220 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.123294 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.138466 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.152916 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.185837 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.199589 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.199653 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.199672 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.199701 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.199725 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:06Z","lastTransitionTime":"2025-11-28T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.209774 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd
994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.237906 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.256142 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.284207 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.303400 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.303457 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.303473 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.303495 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.303511 4779 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:06Z","lastTransitionTime":"2025-11-28T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.407128 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.407166 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.407175 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.407190 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.407199 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:06Z","lastTransitionTime":"2025-11-28T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.510026 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.510129 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.510147 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.510170 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.510190 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:06Z","lastTransitionTime":"2025-11-28T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.613380 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.613429 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.613443 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.613468 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.613476 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:06Z","lastTransitionTime":"2025-11-28T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.717569 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.717605 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.717618 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.717634 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.717647 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:06Z","lastTransitionTime":"2025-11-28T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.726201 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.726269 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.726312 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:06 crc kubenswrapper[4779]: E1128 12:36:06.726392 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:06 crc kubenswrapper[4779]: E1128 12:36:06.726495 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:06 crc kubenswrapper[4779]: E1128 12:36:06.726676 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.820365 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.820426 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.820443 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.820470 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.820489 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:06Z","lastTransitionTime":"2025-11-28T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.923732 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.923800 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.923816 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.923843 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.923859 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:06Z","lastTransitionTime":"2025-11-28T12:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.959486 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd"} Nov 28 12:36:06 crc kubenswrapper[4779]: I1128 12:36:06.993708 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/e
tc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"r
un-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:06Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.009916 4779 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.027066 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.027179 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.027197 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.027221 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.027239 4779 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:07Z","lastTransitionTime":"2025-11-28T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.046035 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.069258 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.090504 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.106889 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.128923 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.130364 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.130416 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.130434 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.130458 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.130477 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:07Z","lastTransitionTime":"2025-11-28T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.150163 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.170415 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.190599 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.208372 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.231165 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.233374 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.233418 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:07 crc 
kubenswrapper[4779]: I1128 12:36:07.233434 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.233457 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.233475 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:07Z","lastTransitionTime":"2025-11-28T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.252240 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.275392 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPat
h\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.299029 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:07Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.336151 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.336236 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.336262 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.336295 4779 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.336318 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:07Z","lastTransitionTime":"2025-11-28T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.438813 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.438879 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.438898 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.438924 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.438942 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:07Z","lastTransitionTime":"2025-11-28T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.541772 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.541829 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.541846 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.541870 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.541888 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:07Z","lastTransitionTime":"2025-11-28T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.644580 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.644648 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.644668 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.644696 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.644714 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:07Z","lastTransitionTime":"2025-11-28T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.747550 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.747598 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.747611 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.747632 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.747646 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:07Z","lastTransitionTime":"2025-11-28T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.850569 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.850936 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.850949 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.850966 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.850978 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:07Z","lastTransitionTime":"2025-11-28T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.953334 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.953371 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.953381 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.953396 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.953405 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:07Z","lastTransitionTime":"2025-11-28T12:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.962383 4779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.962781 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:36:07 crc kubenswrapper[4779]: I1128 12:36:07.962809 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.054005 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.054870 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.054909 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.054922 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.054942 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.054953 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:08Z","lastTransitionTime":"2025-11-28T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.057578 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.067058 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.080507 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.093873 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.115737 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.129672 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.143964 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.157000 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.157851 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.157983 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.158086 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.158234 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.158346 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:08Z","lastTransitionTime":"2025-11-28T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.174162 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.191918 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.206128 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.222580 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.235139 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.256357 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef98695
22bee22ade5af4a2d4abfbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.260871 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.260919 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.260930 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.260955 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.260968 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:08Z","lastTransitionTime":"2025-11-28T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.267590 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.289446 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.308035 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.328860 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.341298 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.359675 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.363727 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.363788 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.363806 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.363829 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.363844 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:08Z","lastTransitionTime":"2025-11-28T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.374558 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.387276 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.399505 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.413929 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.427229 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.439666 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.453373 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee
3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.467085 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.467214 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.467234 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.467264 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.467286 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:08Z","lastTransitionTime":"2025-11-28T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.472885 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.493049 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269
019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.529116 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869
522bee22ade5af4a2d4abfbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.542489 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.569289 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.569330 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.569340 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.569356 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.569369 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:08Z","lastTransitionTime":"2025-11-28T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.671831 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.671888 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.671907 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.671932 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.671949 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:08Z","lastTransitionTime":"2025-11-28T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.725881 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.725901 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.726009 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:08 crc kubenswrapper[4779]: E1128 12:36:08.726239 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:08 crc kubenswrapper[4779]: E1128 12:36:08.726395 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:08 crc kubenswrapper[4779]: E1128 12:36:08.726523 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.774591 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.774635 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.774651 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.774671 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.774687 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:08Z","lastTransitionTime":"2025-11-28T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.877637 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.877700 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.877709 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.877723 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.877733 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:08Z","lastTransitionTime":"2025-11-28T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.969517 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/0.log" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.974526 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd" exitCode=1 Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.974607 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd"} Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.976214 4779 scope.go:117] "RemoveContainer" containerID="73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.980659 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.980729 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.980753 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.980787 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:08 crc kubenswrapper[4779]: I1128 12:36:08.980812 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:08Z","lastTransitionTime":"2025-11-28T12:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.000918 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:08Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.021292 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.046889 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.070054 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.070827 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.089654 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.089718 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.089733 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.089754 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.089767 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:09Z","lastTransitionTime":"2025-11-28T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.091955 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.114870 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.131621 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.156381 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.179327 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.192740 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.192782 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.192800 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.192824 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.192843 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:09Z","lastTransitionTime":"2025-11-28T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.202493 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.225716 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.249517 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.258957 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.278145 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.295878 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.295924 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.295935 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.295954 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.295965 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:09Z","lastTransitionTime":"2025-11-28T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.300768 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:08Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 12:36:08.214262 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:08.214316 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:08.214325 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:08.214342 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:08.214349 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:08.214375 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:08.214436 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 12:36:08.214469 6125 factory.go:656] Stopping watch factory\\\\nI1128 12:36:08.214491 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1128 12:36:08.214534 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:08.214552 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:08.214563 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:08.214573 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:08.214583 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:08.214603 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 
12:36:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.399790 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.399851 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.399869 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.399891 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.399911 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:09Z","lastTransitionTime":"2025-11-28T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.502812 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.502874 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.502891 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.502915 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.502933 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:09Z","lastTransitionTime":"2025-11-28T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.605604 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.605652 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.605663 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.605680 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.605694 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:09Z","lastTransitionTime":"2025-11-28T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.708412 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.708456 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.708470 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.708487 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.708500 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:09Z","lastTransitionTime":"2025-11-28T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.749906 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.773146 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.790212 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.807477 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.811015 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.811054 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.811070 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.811101 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.811143 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:09Z","lastTransitionTime":"2025-11-28T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.830490 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.849223 4779 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.864944 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.880416 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.903310 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.917380 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.917444 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.917464 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.917495 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.917518 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:09Z","lastTransitionTime":"2025-11-28T12:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.924470 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.944854 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.967767 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.981829 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/0.log" Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.985728 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201"} Nov 28 12:36:09 crc kubenswrapper[4779]: I1128 12:36:09.986301 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.006086 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.020415 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.020465 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.020480 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.020498 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.020515 4779 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:10Z","lastTransitionTime":"2025-11-28T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.057166 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkub
e-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:08Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 12:36:08.214262 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:08.214316 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:08.214325 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:08.214342 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:08.214349 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:08.214375 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:08.214436 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 12:36:08.214469 6125 factory.go:656] Stopping watch factory\\\\nI1128 12:36:08.214491 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1128 12:36:08.214534 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:08.214552 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:08.214563 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:08.214573 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 
12:36:08.214583 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:08.214603 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 12:36:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.080885 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.105711 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.123129 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.123161 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.123172 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.123188 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.123200 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:10Z","lastTransitionTime":"2025-11-28T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.139326 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.154031 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.162733 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.183013 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.202376 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b231
6517c6e5aa17282ee1ba5201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:08Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 12:36:08.214262 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:08.214316 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:08.214325 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:08.214342 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:08.214349 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:08.214375 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:08.214436 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 12:36:08.214469 6125 factory.go:656] Stopping watch factory\\\\nI1128 12:36:08.214491 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1128 12:36:08.214534 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:08.214552 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:08.214563 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:08.214573 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:08.214583 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:08.214603 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 
12:36:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.214579 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.223222 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.224727 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.224754 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.224762 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.224775 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.224783 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:10Z","lastTransitionTime":"2025-11-28T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.237239 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.248219 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.262342 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.279224 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.288803 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.312249 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.323993 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.327150 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.327189 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.327202 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.327218 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.327226 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:10Z","lastTransitionTime":"2025-11-28T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.429743 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.429811 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.429829 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.429856 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.429874 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:10Z","lastTransitionTime":"2025-11-28T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.532458 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.532511 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.532528 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.532551 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.532568 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:10Z","lastTransitionTime":"2025-11-28T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.635207 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.635257 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.635275 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.635296 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.635312 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:10Z","lastTransitionTime":"2025-11-28T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.725443 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.725544 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.725579 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:10 crc kubenswrapper[4779]: E1128 12:36:10.726203 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:10 crc kubenswrapper[4779]: E1128 12:36:10.726720 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:10 crc kubenswrapper[4779]: E1128 12:36:10.727187 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.738515 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.738583 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.738603 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.738634 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.738655 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:10Z","lastTransitionTime":"2025-11-28T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.841936 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.842020 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.842046 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.842130 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.842157 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:10Z","lastTransitionTime":"2025-11-28T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.945336 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.945399 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.945422 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.945451 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:10 crc kubenswrapper[4779]: I1128 12:36:10.945474 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:10Z","lastTransitionTime":"2025-11-28T12:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.048058 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.048167 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.048188 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.048213 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.048231 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.151377 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.151428 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.151445 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.151466 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.151482 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.254512 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.254584 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.254609 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.254638 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.254659 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.357588 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.357670 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.357688 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.357716 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.357739 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.385906 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.385948 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.386029 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.386115 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.386136 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: E1128 12:36:11.406070 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:11Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.412058 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.412132 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.412149 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.412169 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.412183 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: E1128 12:36:11.430212 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:11Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.434889 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.434938 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.434951 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.434971 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.434983 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: E1128 12:36:11.450780 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:11Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.456706 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.456767 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.456780 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.456799 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.456811 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: E1128 12:36:11.474071 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:11Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.479651 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.479679 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.479687 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.479701 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.479711 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: E1128 12:36:11.499196 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:11Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:11 crc kubenswrapper[4779]: E1128 12:36:11.499537 4779 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.502275 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.502353 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.502380 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.502412 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.502437 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.606133 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.606184 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.606198 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.606221 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.606242 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.708443 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.708492 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.708503 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.708521 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.708532 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.812448 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.812505 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.812523 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.812548 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.812565 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.910284 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d"] Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.911767 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.915059 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.915274 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.915301 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.915385 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.915411 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:11Z","lastTransitionTime":"2025-11-28T12:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.916187 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.917334 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.942376 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:11Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.964251 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:11Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.986206 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:11Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.997629 4779 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/1.log" Nov 28 12:36:11 crc kubenswrapper[4779]: I1128 12:36:11.998897 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/0.log" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.005752 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201" exitCode=1 Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.005815 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201"} Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.005921 4779 scope.go:117] "RemoveContainer" containerID="73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.007026 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.007438 4779 scope.go:117] "RemoveContainer" containerID="2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201" Nov 28 12:36:12 crc kubenswrapper[4779]: E1128 12:36:12.007741 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.019618 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.019684 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.019702 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.019732 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.019751 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:12Z","lastTransitionTime":"2025-11-28T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.044433 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.079499 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b231
6517c6e5aa17282ee1ba5201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:08Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 12:36:08.214262 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:08.214316 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:08.214325 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:08.214342 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:08.214349 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:08.214375 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:08.214436 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 12:36:08.214469 6125 factory.go:656] Stopping watch factory\\\\nI1128 12:36:08.214491 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1128 12:36:08.214534 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:08.214552 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:08.214563 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:08.214573 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:08.214583 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:08.214603 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 
12:36:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.087217 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd0b81f7-c868-4f90-b20d-9d1b53f5216f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jf46d\" (UID: \"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.087411 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd0b81f7-c868-4f90-b20d-9d1b53f5216f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jf46d\" (UID: \"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.087578 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smlr4\" (UniqueName: \"kubernetes.io/projected/fd0b81f7-c868-4f90-b20d-9d1b53f5216f-kube-api-access-smlr4\") pod \"ovnkube-control-plane-749d76644c-jf46d\" (UID: \"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.087706 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd0b81f7-c868-4f90-b20d-9d1b53f5216f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jf46d\" (UID: \"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.101049 4779 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.117581 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.123684 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.123801 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.123863 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.123941 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.123999 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:12Z","lastTransitionTime":"2025-11-28T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.137455 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.155704 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.177909 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.189384 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fd0b81f7-c868-4f90-b20d-9d1b53f5216f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jf46d\" (UID: \"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.189452 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd0b81f7-c868-4f90-b20d-9d1b53f5216f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jf46d\" (UID: \"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.189508 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smlr4\" (UniqueName: \"kubernetes.io/projected/fd0b81f7-c868-4f90-b20d-9d1b53f5216f-kube-api-access-smlr4\") pod \"ovnkube-control-plane-749d76644c-jf46d\" (UID: \"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.189600 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd0b81f7-c868-4f90-b20d-9d1b53f5216f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jf46d\" (UID: \"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.191599 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/fd0b81f7-c868-4f90-b20d-9d1b53f5216f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jf46d\" (UID: \"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.191807 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fd0b81f7-c868-4f90-b20d-9d1b53f5216f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jf46d\" (UID: \"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.196820 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.201343 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fd0b81f7-c868-4f90-b20d-9d1b53f5216f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jf46d\" (UID: \"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.217793 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smlr4\" (UniqueName: \"kubernetes.io/projected/fd0b81f7-c868-4f90-b20d-9d1b53f5216f-kube-api-access-smlr4\") pod \"ovnkube-control-plane-749d76644c-jf46d\" (UID: \"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.217981 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.228508 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.229063 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.229159 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.229185 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.229216 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.229242 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:12Z","lastTransitionTime":"2025-11-28T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.237987 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.268056 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.289683 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.315643 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.333835 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.333911 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.333935 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.333959 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.333976 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:12Z","lastTransitionTime":"2025-11-28T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.336317 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.346685 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.359541 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.382704 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.403098 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.425014 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.437686 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.437737 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:12 crc 
kubenswrapper[4779]: I1128 12:36:12.437751 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.437772 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.437788 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:12Z","lastTransitionTime":"2025-11-28T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.446019 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.461223 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.478351 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.499281 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.517926 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b231
6517c6e5aa17282ee1ba5201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:08Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 12:36:08.214262 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:08.214316 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:08.214325 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:08.214342 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:08.214349 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:08.214375 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:08.214436 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 12:36:08.214469 6125 factory.go:656] Stopping watch factory\\\\nI1128 12:36:08.214491 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1128 12:36:08.214534 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:08.214552 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:08.214563 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:08.214573 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:08.214583 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:08.214603 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 12:36:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"or removal\\\\nI1128 12:36:10.134712 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 12:36:10.134716 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:10.134740 6246 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:10.134747 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:10.134749 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:10.134785 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 12:36:10.134810 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:10.135195 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:10.135267 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:10.135346 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:10.135400 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:10.135435 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:10.135471 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:10.135421 6246 handler.go:208] Removed 
*v1.EgressIP event handler 8\\\\nI1128 12:36:10.135482 6246 factory.go:656] Stopping watch factory\\\\nI1128 12:36:10.135558 6246 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.531298 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.544354 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.544430 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.544457 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.544485 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.544504 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:12Z","lastTransitionTime":"2025-11-28T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.550149 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.568985 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.580798 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.592928 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.606961 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.622899 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.640008 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.652831 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.652878 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.652896 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.652926 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.652946 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:12Z","lastTransitionTime":"2025-11-28T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.656241 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.668473 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.683145 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.693595 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-c2psj"] Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.694608 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:12 crc kubenswrapper[4779]: E1128 12:36:12.694819 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.702829 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee
3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.716053 4779 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.725437 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:12 crc kubenswrapper[4779]: E1128 12:36:12.725578 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.725741 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.725959 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:12 crc kubenswrapper[4779]: E1128 12:36:12.725995 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:12 crc kubenswrapper[4779]: E1128 12:36:12.726465 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.731830 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.755283 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.755487 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.755520 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.755532 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.755550 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.755562 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:12Z","lastTransitionTime":"2025-11-28T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.773283 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.795426 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.795464 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vbz\" (UniqueName: \"kubernetes.io/projected/2d9943eb-ea06-476d-8736-0a45e588d9f4-kube-api-access-l8vbz\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.796762 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.818829 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.843711 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.857986 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.858282 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.858457 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.858534 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.858553 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:12Z","lastTransitionTime":"2025-11-28T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.868934 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:08Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 12:36:08.214262 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:08.214316 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:08.214325 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:08.214342 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:08.214349 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:08.214375 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:08.214436 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 12:36:08.214469 6125 factory.go:656] Stopping watch factory\\\\nI1128 12:36:08.214491 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1128 12:36:08.214534 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:08.214552 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:08.214563 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:08.214573 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:08.214583 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:08.214603 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 
12:36:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"or removal\\\\nI1128 12:36:10.134712 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 12:36:10.134716 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:10.134740 6246 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:10.134747 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:10.134749 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:10.134785 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 12:36:10.134810 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:10.135195 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:10.135267 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:10.135346 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:10.135400 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:10.135435 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:10.135471 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:10.135421 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 12:36:10.135482 6246 factory.go:656] Stopping watch factory\\\\nI1128 12:36:10.135558 6246 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.882633 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.897008 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.897298 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8vbz\" (UniqueName: \"kubernetes.io/projected/2d9943eb-ea06-476d-8736-0a45e588d9f4-kube-api-access-l8vbz\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:12 crc kubenswrapper[4779]: E1128 12:36:12.897168 4779 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 12:36:12 crc kubenswrapper[4779]: E1128 12:36:12.897633 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs podName:2d9943eb-ea06-476d-8736-0a45e588d9f4 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:13.397609931 +0000 UTC m=+33.963285295 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs") pod "network-metrics-daemon-c2psj" (UID: "2d9943eb-ea06-476d-8736-0a45e588d9f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.897020 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.913170 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.927921 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8vbz\" (UniqueName: \"kubernetes.io/projected/2d9943eb-ea06-476d-8736-0a45e588d9f4-kube-api-access-l8vbz\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.936606 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.958342 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.960438 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.960521 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.960551 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.960583 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.960608 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:12Z","lastTransitionTime":"2025-11-28T12:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.973885 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:12 crc kubenswrapper[4779]: I1128 12:36:12.991777 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.006343 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.013467 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/1.log" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.018139 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" event={"ID":"fd0b81f7-c868-4f90-b20d-9d1b53f5216f","Type":"ContainerStarted","Data":"383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad"} Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.018279 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" event={"ID":"fd0b81f7-c868-4f90-b20d-9d1b53f5216f","Type":"ContainerStarted","Data":"d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99"} Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.018444 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" event={"ID":"fd0b81f7-c868-4f90-b20d-9d1b53f5216f","Type":"ContainerStarted","Data":"65f4b2f8cc802501c567aa7d29530a5bbc035e830d93c84867ad801690f04b4c"} Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.027521 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.045019 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.063985 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.064226 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.064374 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.064507 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.064621 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:13Z","lastTransitionTime":"2025-11-28T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.075602 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c
28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.101554 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:08Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 12:36:08.214262 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:08.214316 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:08.214325 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:08.214342 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:08.214349 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:08.214375 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:08.214436 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 12:36:08.214469 6125 factory.go:656] Stopping watch factory\\\\nI1128 12:36:08.214491 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1128 12:36:08.214534 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:08.214552 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:08.214563 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:08.214573 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:08.214583 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:08.214603 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 12:36:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"or removal\\\\nI1128 12:36:10.134712 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 12:36:10.134716 6246 handler.go:190] Sending *v1.Pod event 
handler 6 for removal\\\\nI1128 12:36:10.134740 6246 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:10.134747 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:10.134749 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:10.134785 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 12:36:10.134810 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:10.135195 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:10.135267 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:10.135346 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:10.135400 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:10.135435 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:10.135471 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:10.135421 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 12:36:10.135482 6246 factory.go:656] Stopping watch factory\\\\nI1128 12:36:10.135558 6246 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.119331 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.138904 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.152380 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.168366 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.168408 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.168419 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.168435 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.168446 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:13Z","lastTransitionTime":"2025-11-28T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.169564 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.194428 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.212174 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.229981 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.243907 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.258079 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.271626 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.271679 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.271700 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.271728 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.271746 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:13Z","lastTransitionTime":"2025-11-28T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.279604 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.306007 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.323799 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.344376 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.363434 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.374874 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.374957 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.374977 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.375003 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.375020 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:13Z","lastTransitionTime":"2025-11-28T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.382630 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.398172 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.404545 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.404818 4779 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 12:36:13 crc 
kubenswrapper[4779]: E1128 12:36:13.404943 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs podName:2d9943eb-ea06-476d-8736-0a45e588d9f4 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:14.404911128 +0000 UTC m=+34.970586542 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs") pod "network-metrics-daemon-c2psj" (UID: "2d9943eb-ea06-476d-8736-0a45e588d9f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.415095 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.451527 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.478414 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.478701 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.478778 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.478848 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.478924 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:13Z","lastTransitionTime":"2025-11-28T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.478304 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:08Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 12:36:08.214262 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:08.214316 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:08.214325 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:08.214342 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:08.214349 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:08.214375 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:08.214436 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 12:36:08.214469 6125 factory.go:656] Stopping watch factory\\\\nI1128 12:36:08.214491 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1128 12:36:08.214534 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:08.214552 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:08.214563 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:08.214573 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:08.214583 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:08.214603 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 
12:36:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"or removal\\\\nI1128 12:36:10.134712 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 12:36:10.134716 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:10.134740 6246 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:10.134747 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:10.134749 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:10.134785 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 12:36:10.134810 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:10.135195 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:10.135267 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:10.135346 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:10.135400 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:10.135435 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:10.135471 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:10.135421 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 12:36:10.135482 6246 factory.go:656] Stopping watch factory\\\\nI1128 12:36:10.135558 6246 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.499454 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.518650 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.531676 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 
12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.549147 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.582286 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.582327 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.582341 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.582359 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.582371 4779 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:13Z","lastTransitionTime":"2025-11-28T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.685564 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.685623 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.685639 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.685660 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.685674 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:13Z","lastTransitionTime":"2025-11-28T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.707254 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.707417 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.707469 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.707509 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.707549 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.707638 4779 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.707744 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:36:29.707687001 +0000 UTC m=+50.273362395 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.707828 4779 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.707822 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:29.707803915 +0000 UTC m=+50.273479309 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.707760 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.708010 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:29.707948208 +0000 UTC m=+50.273623612 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.708034 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.707760 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.708071 4779 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.708079 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.708112 4779 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.708157 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:29.708136983 +0000 UTC m=+50.273812347 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:13 crc kubenswrapper[4779]: E1128 12:36:13.708188 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:29.708172704 +0000 UTC m=+50.273848088 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.789158 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.789207 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.789219 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.789237 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.789252 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:13Z","lastTransitionTime":"2025-11-28T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.892358 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.892447 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.892465 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.892488 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:13 crc kubenswrapper[4779]: I1128 12:36:13.892506 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:13Z","lastTransitionTime":"2025-11-28T12:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.026205 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.026265 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.026284 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.026307 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.026325 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:14Z","lastTransitionTime":"2025-11-28T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.128721 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.128772 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.128789 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.128810 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.128826 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:14Z","lastTransitionTime":"2025-11-28T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.231575 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.231898 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.232320 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.232502 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.232915 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:14Z","lastTransitionTime":"2025-11-28T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.337586 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.338002 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.338210 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.338401 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.338645 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:14Z","lastTransitionTime":"2025-11-28T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.427661 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:14 crc kubenswrapper[4779]: E1128 12:36:14.427912 4779 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 12:36:14 crc kubenswrapper[4779]: E1128 12:36:14.427971 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs podName:2d9943eb-ea06-476d-8736-0a45e588d9f4 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:16.427952565 +0000 UTC m=+36.993627929 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs") pod "network-metrics-daemon-c2psj" (UID: "2d9943eb-ea06-476d-8736-0a45e588d9f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.443022 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.443081 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.443132 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.443173 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.443193 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:14Z","lastTransitionTime":"2025-11-28T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.546572 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.546627 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.546645 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.546668 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.546715 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:14Z","lastTransitionTime":"2025-11-28T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.649873 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.649941 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.649963 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.649991 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.650013 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:14Z","lastTransitionTime":"2025-11-28T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.726160 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.726217 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.726189 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:14 crc kubenswrapper[4779]: E1128 12:36:14.726367 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.726399 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:14 crc kubenswrapper[4779]: E1128 12:36:14.726541 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:14 crc kubenswrapper[4779]: E1128 12:36:14.726748 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:14 crc kubenswrapper[4779]: E1128 12:36:14.726980 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.753813 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.753907 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.753935 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.753969 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.754071 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:14Z","lastTransitionTime":"2025-11-28T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.856516 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.856572 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.856589 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.856611 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:14 crc kubenswrapper[4779]: I1128 12:36:14.856629 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:14Z","lastTransitionTime":"2025-11-28T12:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [Editor's condensation: the five-entry status sequence above — kubelet_node_status.go:724 "Recording event message for node" for NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID and NodeNotReady, followed by setters.go:603 "Node became not ready" with the same KubeletNotReady "no CNI configuration file in /etc/kubernetes/cni/net.d/" message — occurs twelve more times at roughly 100 ms intervals, timestamps 12:36:14.959 through 12:36:16.103. The repetitions are identical apart from timestamps, and the capture breaks off mid-entry in the last one.]
Has your network provider started?"} Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.206507 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.206576 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.206599 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.206624 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.206641 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:16Z","lastTransitionTime":"2025-11-28T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.309845 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.310247 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.310447 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.310673 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.310858 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:16Z","lastTransitionTime":"2025-11-28T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.414839 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.414935 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.414952 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.414982 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.414999 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:16Z","lastTransitionTime":"2025-11-28T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.447709 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:16 crc kubenswrapper[4779]: E1128 12:36:16.447905 4779 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 12:36:16 crc kubenswrapper[4779]: E1128 12:36:16.448022 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs podName:2d9943eb-ea06-476d-8736-0a45e588d9f4 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:20.44799097 +0000 UTC m=+41.013666334 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs") pod "network-metrics-daemon-c2psj" (UID: "2d9943eb-ea06-476d-8736-0a45e588d9f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.518661 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.518730 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.518752 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.518780 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.518800 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:16Z","lastTransitionTime":"2025-11-28T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.622161 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.622219 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.622233 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.622252 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.622264 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:16Z","lastTransitionTime":"2025-11-28T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.725174 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 12:36:16 crc kubenswrapper[4779]: E1128 12:36:16.725313 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.725294 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.725422 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj"
Nov 28 12:36:16 crc kubenswrapper[4779]: E1128 12:36:16.725779 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 12:36:16 crc kubenswrapper[4779]: E1128 12:36:16.725832 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.726164 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:36:16 crc kubenswrapper[4779]: E1128 12:36:16.726514 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
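The MountVolume failure a few entries earlier (12:36:16.448022, nestedpendingoperations.go) shows the volume manager's retry policy: after a failed mount it permits no retries until a backoff window elapses, here "durationBeforeRetry 4s". A hedged sketch of that doubling-with-cap pattern in stdlib Python; the constants are illustrative assumptions, not kubelet's exact values:

import time

def retry_with_backoff(operation, initial_delay=0.5, factor=2.0, max_delay=120.0):
    """Run operation() until it succeeds, doubling the wait after each failure.

    Mirrors the pattern in the log: a failed MountVolume.SetUp is not retried
    immediately; the next attempt is gated behind an exponentially growing
    delay (0.5s, 1s, 2s, 4s, ... capped at max_delay).
    """
    delay = initial_delay
    while True:
        try:
            return operation()
        except Exception as err:
            print(f"operation failed: {err}; no retries permitted for {delay:.1f}s")
            time.sleep(delay)
            delay = min(delay * factor, max_delay)

The 4s seen in the entry is consistent with a fourth attempt under such a schedule; the secret mount keeps failing here because the object "openshift-multus"/"metrics-daemon-secret" is not yet registered with the kubelet.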
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.726844 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.727150 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.727370 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.728260 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.728434 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:16Z","lastTransitionTime":"2025-11-28T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.832662 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.832820 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.832847 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.832878 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.832898 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:16Z","lastTransitionTime":"2025-11-28T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.937138 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.937204 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.937221 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.937245 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:16 crc kubenswrapper[4779]: I1128 12:36:16.937263 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:16Z","lastTransitionTime":"2025-11-28T12:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.039930 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.039989 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.040012 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.040043 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.040062 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:17Z","lastTransitionTime":"2025-11-28T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.143362 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.143976 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.144189 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.144341 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.144515 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:17Z","lastTransitionTime":"2025-11-28T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.246962 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.247040 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.247058 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.247080 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.247135 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:17Z","lastTransitionTime":"2025-11-28T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.350557 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.350628 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.350647 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.350674 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.350694 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:17Z","lastTransitionTime":"2025-11-28T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.452891 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.452940 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.452957 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.452981 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.452998 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:17Z","lastTransitionTime":"2025-11-28T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.555984 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.556042 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.556060 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.556083 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.556156 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:17Z","lastTransitionTime":"2025-11-28T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.658907 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.658958 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.658974 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.658996 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.659014 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:17Z","lastTransitionTime":"2025-11-28T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.761658 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.761718 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.761738 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.761763 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.761783 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:17Z","lastTransitionTime":"2025-11-28T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.865298 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.865373 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.865399 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.865432 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.865457 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:17Z","lastTransitionTime":"2025-11-28T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.967974 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.968016 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.968026 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.968042 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:17 crc kubenswrapper[4779]: I1128 12:36:17.968055 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:17Z","lastTransitionTime":"2025-11-28T12:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.071259 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.071307 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.071323 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.071346 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.071364 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:18Z","lastTransitionTime":"2025-11-28T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.173819 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.174255 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.174403 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.174544 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.174677 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:18Z","lastTransitionTime":"2025-11-28T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.277636 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.277692 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.277709 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.277735 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.277752 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:18Z","lastTransitionTime":"2025-11-28T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.380631 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.380693 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.380711 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.380734 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.380751 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:18Z","lastTransitionTime":"2025-11-28T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.483833 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.483883 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.483899 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.483931 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.483950 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:18Z","lastTransitionTime":"2025-11-28T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.587310 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.587380 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.587402 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.587430 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.587450 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:18Z","lastTransitionTime":"2025-11-28T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.690275 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.690341 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.690358 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.690381 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.690399 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:18Z","lastTransitionTime":"2025-11-28T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.725854 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj"
Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.725918 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.725953 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.725968 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 12:36:18 crc kubenswrapper[4779]: E1128 12:36:18.726541 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4"
Nov 28 12:36:18 crc kubenswrapper[4779]: E1128 12:36:18.726723 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 12:36:18 crc kubenswrapper[4779]: E1128 12:36:18.726835 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 12:36:18 crc kubenswrapper[4779]: E1128 12:36:18.727003 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
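Every sync failure in this batch cites the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/, so no pod sandbox can get a network. A hedged stdlib-Python sketch of the check an operator might run on the node; the assumption that the lexicographically first *.conf/*.conflist file wins mirrors libcni's documented lookup and is not taken from this log:

import json
import os
import sys

CNI_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the errors above

def find_cni_configs(directory=CNI_DIR):
    """Return candidate CNI config file names, sorted as libcni would scan them."""
    try:
        names = sorted(os.listdir(directory))
    except FileNotFoundError:
        return []
    return [n for n in names if n.endswith((".conf", ".conflist", ".json"))]

configs = find_cni_configs()
if not configs:
    sys.exit(f"no CNI configuration file in {CNI_DIR} -- matches the kubelet error")
for name in configs:
    with open(os.path.join(CNI_DIR, name)) as f:
        data = json.load(f)  # malformed JSON would also keep NetworkReady=false
    print(name, "->", data.get("name", "<unnamed>"))

On this cluster the directory stays empty until the network operator (Multus/OVN-Kubernetes here) writes its config, which is why the node loops in NodeNotReady.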
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.792973 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.793072 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.793122 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.793156 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.793177 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:18Z","lastTransitionTime":"2025-11-28T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.895885 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.895949 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.895971 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.896001 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.896021 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:18Z","lastTransitionTime":"2025-11-28T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.999615 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.999667 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.999685 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:18 crc kubenswrapper[4779]: I1128 12:36:18.999710 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:18.999727 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:18Z","lastTransitionTime":"2025-11-28T12:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.102438 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.102521 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.102543 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.102566 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.102585 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:19Z","lastTransitionTime":"2025-11-28T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.205830 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.205889 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.205906 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.205933 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.205950 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:19Z","lastTransitionTime":"2025-11-28T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.309064 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.309165 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.309189 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.309220 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.309243 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:19Z","lastTransitionTime":"2025-11-28T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.411889 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.411949 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.411964 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.411985 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.412002 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:19Z","lastTransitionTime":"2025-11-28T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.514560 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.514608 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.514620 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.514637 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.514651 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:19Z","lastTransitionTime":"2025-11-28T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.617452 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.617525 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.617542 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.617565 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.617583 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:19Z","lastTransitionTime":"2025-11-28T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.721043 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.721134 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.721153 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.721179 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.721197 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:19Z","lastTransitionTime":"2025-11-28T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.745833 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.769651 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.787989 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha
256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.821926 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b231
6517c6e5aa17282ee1ba5201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d25c753f9447edb42849b2859bd37c3eef9869522bee22ade5af4a2d4abfbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:08Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 12:36:08.214262 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:08.214316 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:08.214325 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:08.214342 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:08.214349 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:08.214375 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:08.214436 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 12:36:08.214469 6125 factory.go:656] Stopping watch factory\\\\nI1128 12:36:08.214491 6125 ovnkube.go:599] Stopped ovnkube\\\\nI1128 12:36:08.214534 6125 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:08.214552 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:08.214563 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:08.214573 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:08.214583 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:08.214603 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 12:36:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"or removal\\\\nI1128 12:36:10.134712 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 12:36:10.134716 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:10.134740 6246 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:10.134747 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:10.134749 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:10.134785 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 12:36:10.134810 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:10.135195 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:10.135267 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:10.135346 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:10.135400 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:10.135435 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:10.135471 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:10.135421 6246 handler.go:208] Removed 
*v1.EgressIP event handler 8\\\\nI1128 12:36:10.135482 6246 factory.go:656] Stopping watch factory\\\\nI1128 12:36:10.135558 6246 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.829566 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.829636 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.829662 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.829697 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.829721 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:19Z","lastTransitionTime":"2025-11-28T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.840442 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.856186 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.917794 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.931657 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.931716 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.931732 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.931754 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.931769 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:19Z","lastTransitionTime":"2025-11-28T12:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.938420 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.950475 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.960933 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.970917 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 
12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.984587 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:19 crc kubenswrapper[4779]: I1128 12:36:19.994925 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.008164 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:20Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.021893 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:20Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.034979 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.035027 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.035040 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.035057 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.035070 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:20Z","lastTransitionTime":"2025-11-28T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.036159 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:20Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.051343 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:20Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.138252 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.138321 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:20 crc 
kubenswrapper[4779]: I1128 12:36:20.138340 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.138366 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.138385 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:20Z","lastTransitionTime":"2025-11-28T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.241901 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.242307 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.242449 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.242593 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.242727 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:20Z","lastTransitionTime":"2025-11-28T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.345978 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.346162 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.346195 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.346263 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.346285 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:20Z","lastTransitionTime":"2025-11-28T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.448954 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.449229 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.449299 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.449366 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.449434 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:20Z","lastTransitionTime":"2025-11-28T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.496368 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:20 crc kubenswrapper[4779]: E1128 12:36:20.496553 4779 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 12:36:20 crc kubenswrapper[4779]: E1128 12:36:20.496629 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs podName:2d9943eb-ea06-476d-8736-0a45e588d9f4 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:28.496605855 +0000 UTC m=+49.062281249 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs") pod "network-metrics-daemon-c2psj" (UID: "2d9943eb-ea06-476d-8736-0a45e588d9f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.552081 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.552261 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.552286 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.552316 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.552341 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:20Z","lastTransitionTime":"2025-11-28T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.655611 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.655673 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.655689 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.655719 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.655736 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:20Z","lastTransitionTime":"2025-11-28T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.726263 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:20 crc kubenswrapper[4779]: E1128 12:36:20.726454 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.726829 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.726908 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:20 crc kubenswrapper[4779]: E1128 12:36:20.727177 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:20 crc kubenswrapper[4779]: E1128 12:36:20.727456 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.727489 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:20 crc kubenswrapper[4779]: E1128 12:36:20.727626 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.758481 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.758546 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.758566 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.758589 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.758609 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:20Z","lastTransitionTime":"2025-11-28T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.861262 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.861340 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.861360 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.861387 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.861404 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:20Z","lastTransitionTime":"2025-11-28T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.965285 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.965356 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.965382 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.965413 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:20 crc kubenswrapper[4779]: I1128 12:36:20.965432 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:20Z","lastTransitionTime":"2025-11-28T12:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.068693 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.068755 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.068772 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.068796 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.068815 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.172174 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.172235 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.172256 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.172285 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.172309 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.275546 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.275621 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.275642 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.275669 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.275693 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.379241 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.379354 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.379373 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.379397 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.379415 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.482753 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.482834 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.482853 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.482883 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.482909 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.586239 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.586326 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.586346 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.586375 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.586397 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.605807 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.605873 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.605897 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.605929 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.605951 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: E1128 12:36:21.629575 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:21Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.634800 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.635034 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.635275 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.635463 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.635645 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: E1128 12:36:21.652057 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:21Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.657511 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.657538 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.657547 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.657561 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.657572 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: E1128 12:36:21.675657 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:21Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.680422 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.680490 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.680503 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.680524 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.680538 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: E1128 12:36:21.699904 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:21Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.704751 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.704852 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.704871 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.704897 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.704914 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: E1128 12:36:21.721453 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:21Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:21 crc kubenswrapper[4779]: E1128 12:36:21.721676 4779 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.723691 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.723747 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.723762 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.723786 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.723801 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.826500 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.826566 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.826585 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.826610 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.826632 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.928975 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.929011 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.929022 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.929038 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:21 crc kubenswrapper[4779]: I1128 12:36:21.929048 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:21Z","lastTransitionTime":"2025-11-28T12:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.033227 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.033284 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.033303 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.033325 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.033344 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:22Z","lastTransitionTime":"2025-11-28T12:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.136381 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.136440 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.136458 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.136481 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.136494 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:22Z","lastTransitionTime":"2025-11-28T12:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.238834 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.238910 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.238932 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.238961 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.238983 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:22Z","lastTransitionTime":"2025-11-28T12:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.342965 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.343042 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.343060 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.343123 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.343148 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:22Z","lastTransitionTime":"2025-11-28T12:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.446249 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.446318 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.446331 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.446355 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.446372 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:22Z","lastTransitionTime":"2025-11-28T12:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.549702 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.549749 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.549759 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.549778 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.549790 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:22Z","lastTransitionTime":"2025-11-28T12:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
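Every cycle carries the same root cause: the kubelet reports NetworkPluginNotReady because /etc/kubernetes/cni/net.d/ contains no CNI configuration, so the node's Ready condition stays False until the cluster's network provider writes its config there. The following is an illustrative check of that same directory, not the kubelet's actual implementation; the accepted extensions follow the usual CNI loader conventions and the path comes straight from the log message:

```python
# cni_ready.py - report whether the CNI configuration directory contains any
# network config, mirroring the condition behind "NetworkPluginNotReady".
# Illustrative only; extensions follow common CNI loader conventions.
import pathlib

CNI_CONF_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")  # path from the log
CNI_EXTENSIONS = {".conf", ".conflist", ".json"}

def cni_configured(conf_dir: pathlib.Path = CNI_CONF_DIR) -> bool:
    """True if at least one CNI config file is present."""
    if not conf_dir.is_dir():
        return False
    return any(p.suffix in CNI_EXTENSIONS
               for p in conf_dir.iterdir() if p.is_file())

if __name__ == "__main__":
    if cni_configured():
        print("CNI config present; network plugin should report ready")
    else:
        print("no CNI configuration file in /etc/kubernetes/cni/net.d/ "
              "- matches the KubeletNotReady message in the log")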
Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.725625 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.725625 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.725688 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj"
Nov 28 12:36:22 crc kubenswrapper[4779]: I1128 12:36:22.725775 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:36:22 crc kubenswrapper[4779]: E1128 12:36:22.725929 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 12:36:22 crc kubenswrapper[4779]: E1128 12:36:22.726008 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 12:36:22 crc kubenswrapper[4779]: E1128 12:36:22.726188 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4"
Nov 28 12:36:22 crc kubenswrapper[4779]: E1128 12:36:22.726322 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[the same five-record status cycle continues at roughly 100 ms intervals from 12:36:22.756 through 12:36:26.685, and the four-pod sandbox/"Error syncing pod, skipping" block above recurs, unchanged apart from timestamps and record order, at 12:36:24.725 and 12:36:26.725]
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:26 crc kubenswrapper[4779]: E1128 12:36:26.725943 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:26 crc kubenswrapper[4779]: E1128 12:36:26.726065 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.726162 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:26 crc kubenswrapper[4779]: E1128 12:36:26.726379 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.788817 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.788911 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.788940 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.788981 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.789007 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:26Z","lastTransitionTime":"2025-11-28T12:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.891361 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.891521 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.891544 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.891574 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.891592 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:26Z","lastTransitionTime":"2025-11-28T12:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.995155 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.995242 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.995260 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.995281 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:26 crc kubenswrapper[4779]: I1128 12:36:26.995296 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:26Z","lastTransitionTime":"2025-11-28T12:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.098177 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.098283 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.098309 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.098349 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.098374 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:27Z","lastTransitionTime":"2025-11-28T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.201464 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.201523 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.201540 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.201565 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.201582 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:27Z","lastTransitionTime":"2025-11-28T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.305036 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.305070 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.305085 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.305119 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.305128 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:27Z","lastTransitionTime":"2025-11-28T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.408639 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.408698 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.408715 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.408738 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.408760 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:27Z","lastTransitionTime":"2025-11-28T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.511949 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.512001 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.512020 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.512043 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.512060 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:27Z","lastTransitionTime":"2025-11-28T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.615026 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.615132 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.615152 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.615180 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.615198 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:27Z","lastTransitionTime":"2025-11-28T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.719011 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.719085 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.719152 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.719186 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.719208 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:27Z","lastTransitionTime":"2025-11-28T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.726717 4779 scope.go:117] "RemoveContainer" containerID="2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.753832 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.793733 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b231
6517c6e5aa17282ee1ba5201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"or removal\\\\nI1128 12:36:10.134712 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 12:36:10.134716 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:10.134740 6246 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:10.134747 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:10.134749 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:10.134785 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 12:36:10.134810 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:10.135195 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:10.135267 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:10.135346 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:10.135400 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:10.135435 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:10.135471 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:10.135421 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 12:36:10.135482 6246 factory.go:656] Stopping watch factory\\\\nI1128 12:36:10.135558 6246 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.814325 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
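
[Annotation] ovnkube-controller above exited with code 1 and sits in CrashLoopBackOff with "back-off 10s restarting failed container". A sketch of the restart delay schedule this implies, assuming upstream kubelet defaults (10s initial back-off, doubled per restart, capped at 5m):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed upstream kubelet defaults: 10s initial, doubling, 5m cap.
	backoff, maxBackoff := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}

With restartCount 1, the logged 10s back-off matches the first step of this schedule; if the container keeps failing the delay grows toward the cap.
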
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.825076 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.825159 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.825219 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.825249 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.825270 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:27Z","lastTransitionTime":"2025-11-28T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.829146 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.844351 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 
12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.864661 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.886296 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.898860 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.916606 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.927586 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.927643 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.927662 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.927689 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.927711 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:27Z","lastTransitionTime":"2025-11-28T12:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.931777 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.948399 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.960601 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.970067 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.981791 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:27 crc kubenswrapper[4779]: I1128 12:36:27.994639 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:27Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.005827 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:28Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.019538 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:28Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.031062 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.031110 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.031130 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.031148 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.031160 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:28Z","lastTransitionTime":"2025-11-28T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.080970 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/1.log" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.083519 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2"} Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.133465 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.133511 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.133520 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.133535 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.133546 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:28Z","lastTransitionTime":"2025-11-28T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.237019 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.237052 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.237060 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.237075 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.237108 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:28Z","lastTransitionTime":"2025-11-28T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.339799 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.339881 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.339906 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.339935 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.339958 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:28Z","lastTransitionTime":"2025-11-28T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.442233 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.442537 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.442673 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.442826 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.442924 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:28Z","lastTransitionTime":"2025-11-28T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.544638 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.544903 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.545000 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.545081 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.545161 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:28Z","lastTransitionTime":"2025-11-28T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.596323 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj"
Nov 28 12:36:28 crc kubenswrapper[4779]: E1128 12:36:28.596439 4779 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 12:36:28 crc kubenswrapper[4779]: E1128 12:36:28.596493 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs podName:2d9943eb-ea06-476d-8736-0a45e588d9f4 nodeName:}" failed. No retries permitted until 2025-11-28 12:36:44.596478997 +0000 UTC m=+65.162154351 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs") pod "network-metrics-daemon-c2psj" (UID: "2d9943eb-ea06-476d-8736-0a45e588d9f4") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.647812 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.647866 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.647882 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.647903 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.647917 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:28Z","lastTransitionTime":"2025-11-28T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.725734 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.725811 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.725847 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj"
Nov 28 12:36:28 crc kubenswrapper[4779]: E1128 12:36:28.726292 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4"
Nov 28 12:36:28 crc kubenswrapper[4779]: E1128 12:36:28.726051 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 12:36:28 crc kubenswrapper[4779]: E1128 12:36:28.726197 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.725874 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:36:28 crc kubenswrapper[4779]: E1128 12:36:28.726477 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.750102 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.750316 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.750385 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.750448 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.750524 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:28Z","lastTransitionTime":"2025-11-28T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.853966 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.854022 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.854042 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.854067 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.854085 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:28Z","lastTransitionTime":"2025-11-28T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.958381 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.958446 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.958470 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.958494 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:28 crc kubenswrapper[4779]: I1128 12:36:28.958511 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:28Z","lastTransitionTime":"2025-11-28T12:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.062334 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.062425 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.062441 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.062464 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.062480 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:29Z","lastTransitionTime":"2025-11-28T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.087253 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn"
Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.108828 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z"
Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.165933 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.165981 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.165999 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.166022 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.166040 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:29Z","lastTransitionTime":"2025-11-28T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.168570 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11
-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 
12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.191734 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.217294 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.239955 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.257087 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.268382 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.268435 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.268454 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.268479 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.268497 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:29Z","lastTransitionTime":"2025-11-28T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.276569 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.293406 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.308692 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.333976 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.355011 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218d
c9abaf935e986271e1fd7ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"or removal\\\\nI1128 12:36:10.134712 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 12:36:10.134716 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:10.134740 6246 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:10.134747 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:10.134749 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:10.134785 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 12:36:10.134810 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:10.135195 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:10.135267 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:10.135346 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:10.135400 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:10.135435 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:10.135471 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:10.135421 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 12:36:10.135482 6246 factory.go:656] Stopping watch factory\\\\nI1128 12:36:10.135558 6246 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.366885 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.370512 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.370555 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.370564 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.370579 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.370589 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:29Z","lastTransitionTime":"2025-11-28T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.381676 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.398644 4779 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6
db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.419724 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.434869 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.448556 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 
12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.453263 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.465711 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.467111 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.473076 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.473131 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.473143 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:29 
crc kubenswrapper[4779]: I1128 12:36:29.473160 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.473175 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:29Z","lastTransitionTime":"2025-11-28T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.489487 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.514775 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-r
esources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.538078 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.553762 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.569888 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.575006 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.575121 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.575142 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.575165 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.575179 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:29Z","lastTransitionTime":"2025-11-28T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.587660 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.604752 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.616968 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.628508 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.641228 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.654142 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.669551 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.677403 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.677450 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.677461 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.677480 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.677491 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:29Z","lastTransitionTime":"2025-11-28T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.683860 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.707884 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.711748 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:36:29 crc kubenswrapper[4779]: E1128 12:36:29.711892 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:37:01.711875148 +0000 UTC m=+82.277550502 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.711894 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.711942 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.711967 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.711985 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:29 crc kubenswrapper[4779]: E1128 12:36:29.712028 4779 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:36:29 crc kubenswrapper[4779]: E1128 12:36:29.712104 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:36:29 crc kubenswrapper[4779]: E1128 12:36:29.712117 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:36:29 crc kubenswrapper[4779]: E1128 12:36:29.712126 4779 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:29 crc kubenswrapper[4779]: E1128 12:36:29.712039 4779 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:36:29 crc 
kubenswrapper[4779]: E1128 12:36:29.712151 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:37:01.712084953 +0000 UTC m=+82.277760347 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:36:29 crc kubenswrapper[4779]: E1128 12:36:29.712145 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:36:29 crc kubenswrapper[4779]: E1128 12:36:29.712181 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:36:29 crc kubenswrapper[4779]: E1128 12:36:29.712189 4779 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:29 crc kubenswrapper[4779]: E1128 12:36:29.712192 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 12:37:01.712173166 +0000 UTC m=+82.277848560 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:29 crc kubenswrapper[4779]: E1128 12:36:29.712212 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 12:37:01.712204496 +0000 UTC m=+82.277879850 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:36:29 crc kubenswrapper[4779]: E1128 12:36:29.712233 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:37:01.712217807 +0000 UTC m=+82.277893191 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.728490 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metr
ics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"or removal\\\\nI1128 12:36:10.134712 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 12:36:10.134716 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:10.134740 6246 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:10.134747 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:10.134749 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:10.134785 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 12:36:10.134810 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:10.135195 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:10.135267 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:10.135346 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:10.135400 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:10.135435 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:10.135471 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:10.135421 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 12:36:10.135482 6246 factory.go:656] Stopping watch factory\\\\nI1128 12:36:10.135558 6246 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.742053 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.754785 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.772605 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.780022 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.780053 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.780064 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.780081 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.780113 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:29Z","lastTransitionTime":"2025-11-28T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.790686 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.824919 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"or removal\\\\nI1128 12:36:10.134712 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 12:36:10.134716 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:10.134740 6246 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:10.134747 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:10.134749 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:10.134785 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 12:36:10.134810 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:10.135195 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:10.135267 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:10.135346 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:10.135400 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:10.135435 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:10.135471 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:10.135421 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 12:36:10.135482 6246 factory.go:656] Stopping watch factory\\\\nI1128 12:36:10.135558 6246 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.840941 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.855554 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.876726 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.882865 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.882918 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.882932 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.882955 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.882970 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:29Z","lastTransitionTime":"2025-11-28T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.895519 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.910653 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.923451 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.934876 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 
12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.946121 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.958648 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.972545 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.984857 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.984901 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.984917 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.984936 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.984949 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:29Z","lastTransitionTime":"2025-11-28T12:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:29 crc kubenswrapper[4779]: I1128 12:36:29.988050 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:29Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.003222 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
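[Editorial aside: the same webhook rejection repeats for every pod on the node, so the quickest triage is to aggregate the entries rather than read each payload. A hypothetical helper along these lines counts "Failed to update status" entries per pod and extracts the webhook being blamed; the file name kubelet.log is illustrative, standing in for a saved copy of this journal. The iptables-alerter patch body from the last entry continues below the sketch.]

```python
# Hypothetical triage script, not part of this journal: group the
# recurring "Failed to update status for pod" entries by pod and by
# the webhook named in the error. Assumes journal lines in the exact
# format shown above, saved to kubelet.log (name is illustrative).
import re
from collections import Counter

pod_re = re.compile(r'"Failed to update status for pod" pod="([^"]+)"')
# Inside the err="..." string the quotes are escaped, hence the \\" forms.
webhook_re = re.compile(r'failed calling webhook \\"([^\\"]+)\\"')

pods, webhooks = Counter(), Counter()
with open("kubelet.log", encoding="utf-8") as f:
    for line in f:
        if (m := pod_re.search(line)):
            pods[m.group(1)] += 1
            if (w := webhook_re.search(line)):
                webhooks[w.group(1)] += 1

for pod, n in pods.most_common():
    print(f"{n:3d}  {pod}")
print("webhooks implicated:", dict(webhooks))
# Against this journal every entry blames the same webhook,
# pod.network-node-identity.openshift.io, so the per-pod counts mostly
# confirm that no pod can publish status until that cert is rotated.
```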
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.020538 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.030556 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.091699 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.091737 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.091746 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.091763 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.091772 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:30Z","lastTransitionTime":"2025-11-28T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.095803 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/2.log"
Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.096800 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/1.log"
Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.101483 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2" exitCode=1
Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.101635 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2"}
Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.101676 4779 scope.go:117] "RemoveContainer" containerID="2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201"
Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.103058 4779 scope.go:117] "RemoveContainer" containerID="3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2"
Nov 28 12:36:30 crc kubenswrapper[4779]: E1128 12:36:30.103259 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60"
Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.136027 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status
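[Editorial aside: the patch payloads themselves are readable once unescaped. Each patch is quoted twice inside the klog message, once for the err="..." string and once for the embedded JSON, which is why the entries show \\\" sequences. A hypothetical decoder keyed to that exact format is sketched below, with a shortened payload as the example; the etcd-crc patch body from the last entry continues after the sketch.]

```python
# Hypothetical decoder, not part of this journal: pretty-print one of the
# escaped status-patch payloads embedded in the kubelet messages above.
import json
import re

def extract_patch(journal_line: str) -> dict:
    """Pull the {...} patch out of a 'failed to patch status' entry."""
    m = re.search(r'failed to patch status \\"(.*)\\" for pod', journal_line)
    if not m:
        raise ValueError("no status patch found in line")
    payload = m.group(1)
    for _ in range(2):  # the patch is backslash-escaped twice in the message
        payload = json.loads(f'"{payload}"')
    return json.loads(payload)

# Shortened example in the same triple-escaped form as the entries above:
line = (r'err="failed to patch status \"{\\\"metadata\\\":'
        r'{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"}}\"'
        r' for pod ..."')
print(json.dumps(extract_patch(line), indent=2))
```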
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.165024 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218d
c9abaf935e986271e1fd7ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0392cb9de3bc430e1d54372b710e29ea04b2316517c6e5aa17282ee1ba5201\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"message\\\":\\\"or removal\\\\nI1128 12:36:10.134712 6246 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 12:36:10.134716 6246 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:10.134740 6246 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:10.134747 6246 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:10.134749 6246 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:10.134785 6246 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 12:36:10.134810 6246 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:10.135195 6246 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:10.135267 6246 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:10.135346 6246 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:10.135400 6246 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:10.135435 6246 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:10.135471 6246 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:10.135421 6246 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 12:36:10.135482 6246 factory.go:656] Stopping watch factory\\\\nI1128 12:36:10.135558 6246 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"message\\\":\\\"h UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1128 12:36:28.904558 6457 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc007ff6d20] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI1128 12:36:28.904608 6457 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 5.545787ms\\\\nF1128 12:36:28.904620 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet vali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.182738 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.195187 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.195249 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.195267 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.195292 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.195312 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:30Z","lastTransitionTime":"2025-11-28T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.198364 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.219068 4779 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6
db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.239513 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.253026 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.267272 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 
12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.288146 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.302526 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 
12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.302587 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.302600 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.302624 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.302637 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:30Z","lastTransitionTime":"2025-11-28T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.303738 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.317147 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.328448 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.344483 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.362919 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.377164 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.391339 4779 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f
1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.406022 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.406075 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.406088 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.406127 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.406142 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:30Z","lastTransitionTime":"2025-11-28T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.407134 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.423814 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:30Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.509244 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.509289 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.509303 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.509321 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.509332 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:30Z","lastTransitionTime":"2025-11-28T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.611931 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.611980 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.611990 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.612007 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.612019 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:30Z","lastTransitionTime":"2025-11-28T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.714914 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.714973 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.714989 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.715041 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.715057 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:30Z","lastTransitionTime":"2025-11-28T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.725837 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.725890 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:30 crc kubenswrapper[4779]: E1128 12:36:30.725967 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.725846 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.726152 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:30 crc kubenswrapper[4779]: E1128 12:36:30.726238 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:30 crc kubenswrapper[4779]: E1128 12:36:30.726443 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:30 crc kubenswrapper[4779]: E1128 12:36:30.726557 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.818408 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.818469 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.818482 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.818505 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.818519 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:30Z","lastTransitionTime":"2025-11-28T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.921366 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.921434 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.921446 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.921462 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:30 crc kubenswrapper[4779]: I1128 12:36:30.921475 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:30Z","lastTransitionTime":"2025-11-28T12:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.024359 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.024404 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.024415 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.024433 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.024445 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.108363 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/2.log" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.113559 4779 scope.go:117] "RemoveContainer" containerID="3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2" Nov 28 12:36:31 crc kubenswrapper[4779]: E1128 12:36:31.113790 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.126485 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.126554 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.126577 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.126605 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.126627 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.137287 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.154327 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.169845 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.186822 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 
12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.205745 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.226578 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.228789 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.228839 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.228851 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.228880 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.228893 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.246623 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.263306 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.280959 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.296992 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.316368 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.332119 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.332197 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc 
kubenswrapper[4779]: I1128 12:36:31.332222 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.332253 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.332276 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.335136 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.349464 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.364321 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.389920 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.419401 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218d
c9abaf935e986271e1fd7ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"message\\\":\\\"h UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1128 12:36:28.904558 6457 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc007ff6d20] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI1128 12:36:28.904608 6457 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 5.545787ms\\\\nF1128 12:36:28.904620 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet vali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.433474 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.434869 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.434898 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.434906 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.434921 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.434940 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.447462 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.538463 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.538536 4779 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.538560 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.538589 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.538611 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.641589 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.641832 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.641924 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.642006 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.642171 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.744382 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.744434 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.744450 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.744472 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.744490 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.826596 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.826653 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.826670 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.826694 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.826711 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: E1128 12:36:31.846881 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.852146 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.852199 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.852218 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.852243 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.852260 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: E1128 12:36:31.873463 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.877743 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.877800 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.877818 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.877840 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.877856 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: E1128 12:36:31.896014 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.900557 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.900605 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.900617 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.900635 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.900647 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: E1128 12:36:31.916858 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.922246 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.922301 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.922322 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.922350 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.922370 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:31 crc kubenswrapper[4779]: E1128 12:36:31.939365 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:31Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:31 crc kubenswrapper[4779]: E1128 12:36:31.939545 4779 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.941479 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.941517 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.941530 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.941549 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:31 crc kubenswrapper[4779]: I1128 12:36:31.941562 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:31Z","lastTransitionTime":"2025-11-28T12:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.044942 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.045020 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.045036 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.045061 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.045077 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:32Z","lastTransitionTime":"2025-11-28T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.148412 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.148470 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.148494 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.148523 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.148545 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:32Z","lastTransitionTime":"2025-11-28T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.251751 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.251819 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.251841 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.251869 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.251891 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:32Z","lastTransitionTime":"2025-11-28T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.358678 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.358752 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.358772 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.358796 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.358820 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:32Z","lastTransitionTime":"2025-11-28T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.467721 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.467833 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.467860 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.467887 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.467909 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:32Z","lastTransitionTime":"2025-11-28T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.570647 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.570736 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.570760 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.570787 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.570807 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:32Z","lastTransitionTime":"2025-11-28T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.673082 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.673170 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.673186 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.673209 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.673227 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:32Z","lastTransitionTime":"2025-11-28T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.725430 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.725490 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.726151 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:32 crc kubenswrapper[4779]: E1128 12:36:32.726300 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:32 crc kubenswrapper[4779]: E1128 12:36:32.726423 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:32 crc kubenswrapper[4779]: E1128 12:36:32.726490 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.726501 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:32 crc kubenswrapper[4779]: E1128 12:36:32.726958 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.775923 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.775970 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.775987 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.776007 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.776025 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:32Z","lastTransitionTime":"2025-11-28T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.879205 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.879348 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.879377 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.879404 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.879425 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:32Z","lastTransitionTime":"2025-11-28T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.982058 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.982131 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.982144 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.982162 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:32 crc kubenswrapper[4779]: I1128 12:36:32.982173 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:32Z","lastTransitionTime":"2025-11-28T12:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:33 crc kubenswrapper[4779]: I1128 12:36:33.085829 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:33 crc kubenswrapper[4779]: I1128 12:36:33.085890 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:33 crc kubenswrapper[4779]: I1128 12:36:33.085904 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:33 crc kubenswrapper[4779]: I1128 12:36:33.085934 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:33 crc kubenswrapper[4779]: I1128 12:36:33.085955 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:33Z","lastTransitionTime":"2025-11-28T12:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 12:36:33 crc kubenswrapper[4779]: I1128 12:36:33.188414 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:33 crc kubenswrapper[4779]: I1128 12:36:33.188672 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:33 crc kubenswrapper[4779]: I1128 12:36:33.188806 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:33 crc kubenswrapper[4779]: I1128 12:36:33.188914 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:33 crc kubenswrapper[4779]: I1128 12:36:33.189015 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:33Z","lastTransitionTime":"2025-11-28T12:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The same five-entry node-status block repeats at roughly 100 ms intervals from 12:36:33.292 through 12:36:34.631, with only the timestamps changing; the duplicate repetitions are elided.]
Nov 28 12:36:34 crc kubenswrapper[4779]: I1128 12:36:34.725833 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj"
Nov 28 12:36:34 crc kubenswrapper[4779]: I1128 12:36:34.725937 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 12:36:34 crc kubenswrapper[4779]: I1128 12:36:34.726037 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 12:36:34 crc kubenswrapper[4779]: E1128 12:36:34.726033 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4"
Nov 28 12:36:34 crc kubenswrapper[4779]: I1128 12:36:34.726168 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:36:34 crc kubenswrapper[4779]: E1128 12:36:34.726327 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 12:36:34 crc kubenswrapper[4779]: E1128 12:36:34.726449 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 12:36:34 crc kubenswrapper[4779]: E1128 12:36:34.727358 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 12:36:34 crc kubenswrapper[4779]: I1128 12:36:34.733830 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:34 crc kubenswrapper[4779]: I1128 12:36:34.733890 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:34 crc kubenswrapper[4779]: I1128 12:36:34.733917 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:34 crc kubenswrapper[4779]: I1128 12:36:34.733949 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:34 crc kubenswrapper[4779]: I1128 12:36:34.733971 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:34Z","lastTransitionTime":"2025-11-28T12:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The five-entry node-status block then repeats at roughly 100 ms intervals from 12:36:34.836 through 12:36:36.693, with only the timestamps changing; the repetitions are elided.]
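[Note] Every entry above reduces to a single condition: the kubelet polls /etc/kubernetes/cni/net.d/ for a CNI configuration file, finds none, and therefore keeps the node's Ready condition False and skips syncing pods that need the pod network. A minimal check from inside the crc VM, assuming shell access; these commands are illustrative and do not appear in the log:

    # Is the directory the kubelet polls actually empty?
    ls -l /etc/kubernetes/cni/net.d/

    # Follow the kubelet's view of the condition; NetworkReady should flip
    # to true once the network provider writes its CNI configuration.
    journalctl -u kubelet -f | grep -i 'NetworkReady'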
Nov 28 12:36:36 crc kubenswrapper[4779]: I1128 12:36:36.726155 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj"
Nov 28 12:36:36 crc kubenswrapper[4779]: I1128 12:36:36.726201 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:36:36 crc kubenswrapper[4779]: I1128 12:36:36.726249 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 12:36:36 crc kubenswrapper[4779]: I1128 12:36:36.726268 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 12:36:36 crc kubenswrapper[4779]: E1128 12:36:36.726376 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4"
Nov 28 12:36:36 crc kubenswrapper[4779]: E1128 12:36:36.726517 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 12:36:36 crc kubenswrapper[4779]: E1128 12:36:36.726649 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 12:36:36 crc kubenswrapper[4779]: E1128 12:36:36.726764 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 12:36:36 crc kubenswrapper[4779]: I1128 12:36:36.796056 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:36 crc kubenswrapper[4779]: I1128 12:36:36.796146 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:36 crc kubenswrapper[4779]: I1128 12:36:36.796158 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:36 crc kubenswrapper[4779]: I1128 12:36:36.796177 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:36 crc kubenswrapper[4779]: I1128 12:36:36.796190 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:36Z","lastTransitionTime":"2025-11-28T12:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The five-entry node-status block again repeats at roughly 100 ms intervals from 12:36:36.898 through 12:36:38.653, with only the timestamps changing; the repetitions are elided.]
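[Note] The recurring "No sandbox for pod can be found. Need to start a new one" entries mean CRI-O holds no pod sandbox for these four pods yet; creating the sandbox is exactly the step that fails while the CNI configuration is missing, so the kubelet retries on every sync pass. One way to inspect the sandboxes from the node, assuming crictl is installed and configured for the CRI-O socket (illustrative, not taken from the log):

    # List pod sandboxes CRI-O knows about, filtered by pod name; the
    # failing pods stay absent or NotReady until the network plugin is up.
    sudo crictl pods --name network-metrics-daemon
    sudo crictl pods --name network-check-target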
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.446533 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:38Z","lastTransitionTime":"2025-11-28T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.550203 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.550256 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.550273 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.550296 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.550316 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:38Z","lastTransitionTime":"2025-11-28T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.652925 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.653212 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.653251 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.653280 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.653299 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:38Z","lastTransitionTime":"2025-11-28T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.725739 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.725815 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.725768 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
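The util.go:30 records mark pods whose sandboxes must be created from scratch, and the pod_workers.go:1301 errors just below show their sync being skipped: a pod that needs a CNI-backed sandbox cannot progress while NetworkReady=false, whereas host-network pods (etcd-crc and kube-apiserver-crc, whose podIP equals the host IP later in this log) keep running. A toy illustration of that gating, using hypothetical minimal types rather than kubelet's actual ones:

```go
package main

import "fmt"

type pod struct {
	name        string
	hostNetwork bool
}

// syncable sketches the gate: host-network pods need no CNI sandbox,
// so network readiness does not block them.
func syncable(networkReady bool, p pod) bool {
	return networkReady || p.hostNetwork
}

func main() {
	pods := []pod{
		{"openshift-etcd/etcd-crc", true}, // host-network static pod
		{"openshift-multus/network-metrics-daemon-c2psj", false},
		{"openshift-network-diagnostics/network-check-target-xd92c", false},
	}
	const networkReady = false // NetworkReady=false, as logged
	for _, p := range pods {
		if syncable(networkReady, p) {
			fmt.Println("sync:", p.name)
		} else {
			fmt.Println("skip (network is not ready):", p.name)
		}
	}
}
```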
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.725746 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:38 crc kubenswrapper[4779]: E1128 12:36:38.726268 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:38 crc kubenswrapper[4779]: E1128 12:36:38.726756 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:38 crc kubenswrapper[4779]: E1128 12:36:38.726912 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.756175 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.756248 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.756270 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.756299 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.756321 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:38Z","lastTransitionTime":"2025-11-28T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.756321 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:38Z","lastTransitionTime":"2025-11-28T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.859202 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.859237 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.859248 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.859263 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.859275 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:38Z","lastTransitionTime":"2025-11-28T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.962879 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.963006 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.963070 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.963150 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:38 crc kubenswrapper[4779]: I1128 12:36:38.963171 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:38Z","lastTransitionTime":"2025-11-28T12:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.065982 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.066055 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.066078 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.066158 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
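Further down in this section, ovnkube-controller sits in CrashLoopBackOff with "back-off 20s restarting" at restartCount 2. That fits the commonly described kubelet restart backoff; the constants below (10s base, doubling per restart, 5m cap) are assumptions about that policy for illustration, not values read from this log:

```go
package main

import (
	"fmt"
	"time"
)

// restartBackoff sketches a capped exponential backoff: 10s after the
// first failure, doubling with each further restart, capped at 5 minutes.
func restartBackoff(restarts int) time.Duration {
	const (
		base = 10 * time.Second
		max  = 5 * time.Minute
	)
	d := base
	for i := 1; i < restarts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	// restart 2 -> 20s, matching the "back-off 20s" message below.
	for r := 1; r <= 6; r++ {
		fmt.Printf("restart %d -> back-off %s\n", r, restartBackoff(r))
	}
}
```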
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.066187 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:39Z","lastTransitionTime":"2025-11-28T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.182131 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.182177 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.182185 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.182199 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.182210 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:39Z","lastTransitionTime":"2025-11-28T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.283803 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.283849 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.283861 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.283874 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.283883 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:39Z","lastTransitionTime":"2025-11-28T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.386115 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.386174 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.386190 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.386212 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.386231 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:39Z","lastTransitionTime":"2025-11-28T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.488992 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.489057 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.489078 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.489141 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.489159 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:39Z","lastTransitionTime":"2025-11-28T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.592199 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.592261 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.592283 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.592312 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.592329 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:39Z","lastTransitionTime":"2025-11-28T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.695743 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.695806 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.695818 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.695842 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.695858 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:39Z","lastTransitionTime":"2025-11-28T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.761654 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.793220 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218d
c9abaf935e986271e1fd7ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"message\\\":\\\"h UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1128 12:36:28.904558 6457 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc007ff6d20] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI1128 12:36:28.904608 6457 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 5.545787ms\\\\nF1128 12:36:28.904620 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet vali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.798541 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.799297 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.799341 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.799377 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.799401 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:39Z","lastTransitionTime":"2025-11-28T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.807304 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.822088 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.840671 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 
12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.862549 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.879241 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.894602 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.901875 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.901943 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.901958 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.901977 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.901989 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:39Z","lastTransitionTime":"2025-11-28T12:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.917464 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.935946 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.954252 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.972402 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:39 crc kubenswrapper[4779]: I1128 12:36:39.988437 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:39Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.003020 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:40Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.004368 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.004431 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.004448 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.004474 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.004491 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:40Z","lastTransitionTime":"2025-11-28T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.018870 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:40Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.037733 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:40Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.052915 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee
3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:40Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.076329 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni
-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:40Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.107037 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.107083 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.107107 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.107125 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.107138 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:40Z","lastTransitionTime":"2025-11-28T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.209501 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.209558 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.209569 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.209588 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.209603 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:40Z","lastTransitionTime":"2025-11-28T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.312185 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.312246 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.312264 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.312286 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.312308 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:40Z","lastTransitionTime":"2025-11-28T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.416180 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.416259 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.416285 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.416318 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.416338 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:40Z","lastTransitionTime":"2025-11-28T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.519074 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.519299 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.519468 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.519500 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.519517 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:40Z","lastTransitionTime":"2025-11-28T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.623337 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.623419 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.623439 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.623471 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.623494 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:40Z","lastTransitionTime":"2025-11-28T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.725674 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.725739 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.725822 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.725867 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:40 crc kubenswrapper[4779]: E1128 12:36:40.725868 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:40 crc kubenswrapper[4779]: E1128 12:36:40.726037 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:40 crc kubenswrapper[4779]: E1128 12:36:40.726232 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:40 crc kubenswrapper[4779]: E1128 12:36:40.726361 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.727995 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.728140 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.728167 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.728197 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.728217 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:40Z","lastTransitionTime":"2025-11-28T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.831428 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.831468 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.831479 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.831499 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.831511 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:40Z","lastTransitionTime":"2025-11-28T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.936846 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.936890 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.936905 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.936925 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:40 crc kubenswrapper[4779]: I1128 12:36:40.936938 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:40Z","lastTransitionTime":"2025-11-28T12:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.040425 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.040505 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.040525 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.040556 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.040577 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:41Z","lastTransitionTime":"2025-11-28T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.146542 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.146626 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.146649 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.146680 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.146706 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:41Z","lastTransitionTime":"2025-11-28T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.249950 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.250017 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.250039 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.250062 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.250079 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:41Z","lastTransitionTime":"2025-11-28T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.353960 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.354046 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.354071 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.354139 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.354165 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:41Z","lastTransitionTime":"2025-11-28T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.457868 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.457926 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.457944 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.457967 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.457985 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:41Z","lastTransitionTime":"2025-11-28T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.561976 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.562051 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.562072 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.562141 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.562174 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:41Z","lastTransitionTime":"2025-11-28T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.665756 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.665819 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.665836 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.665858 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.665874 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:41Z","lastTransitionTime":"2025-11-28T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.769356 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.769431 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.769453 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.769486 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.769505 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:41Z","lastTransitionTime":"2025-11-28T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.871878 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.871927 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.871950 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.871973 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.871988 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:41Z","lastTransitionTime":"2025-11-28T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.974897 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.974961 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.974985 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.975015 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:41 crc kubenswrapper[4779]: I1128 12:36:41.975039 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:41Z","lastTransitionTime":"2025-11-28T12:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.051554 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.051590 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.051602 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.051617 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.051630 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: E1128 12:36:42.068735 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:42Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.073775 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.073876 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.073901 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.073929 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.073954 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: E1128 12:36:42.093476 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:42Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.097748 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.097819 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.097839 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.097865 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.097886 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: E1128 12:36:42.115896 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:42Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.122626 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.122665 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.122708 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.122726 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.122739 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: E1128 12:36:42.137748 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:42Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.142112 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.142179 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.142194 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.142211 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.142223 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: E1128 12:36:42.155479 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:42Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:42 crc kubenswrapper[4779]: E1128 12:36:42.155644 4779 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.157117 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
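
Annotation: every "Error updating node status, will retry" record above fails identically: the patch itself is well-formed, but the node.network-node-identity.openshift.io admission webhook on 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, three months before the node's clock of 2025-11-28T12:36:42Z, so TLS verification fails and the API server rejects the call. Once the kubelet's retry budget is spent (five attempts in upstream kubelet) it logs "update node status exceeds retry count" and starts over. A minimal Python sketch (not OpenShift tooling; assumes the third-party `cryptography` package and that the port is reachable from the node) for reading the webhook certificate's notAfter directly:

    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party package, assumed installed

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint named in the log

    # Fetch the peer certificate WITHOUT verifying it -- verification is
    # exactly what fails in the log, so disable it just to inspect the cert.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    not_after = cert.not_valid_after_utc  # cryptography >= 42; older: not_valid_after
    print("notAfter:", not_after)
    print("expired: ", not_after < datetime.now(timezone.utc))

If this prints an expiry of 2025-08-24, the remedy is certificate rotation (on CRC, typically letting the cluster re-issue its internal certificates), not anything on the kubelet side.
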
event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.157143 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.157153 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.157167 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.157178 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.260111 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.260137 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.260144 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.260159 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.260167 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.362488 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.362538 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.362554 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.362578 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.362597 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.465173 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.465217 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.465233 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.465257 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.465272 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.568996 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.569045 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.569054 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.569077 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.569114 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.672039 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.672156 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.672184 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.672219 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.672240 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.725770 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.725865 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.725938 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.729476 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:42 crc kubenswrapper[4779]: E1128 12:36:42.729622 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:42 crc kubenswrapper[4779]: E1128 12:36:42.729800 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:42 crc kubenswrapper[4779]: E1128 12:36:42.729987 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:42 crc kubenswrapper[4779]: E1128 12:36:42.730155 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
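
Annotation: the NodeNotReady condition and every "Error syncing pod, skipping" record above reduce to one fact: nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/, because the network provider (ovn-kubernetes; see the CrashLoopBackOff further down) has not come up. The kubelet's readiness check is essentially "does a config file exist in confDir". A throwaway sketch of the same check to run by hand on the node (path taken from the log; the accepted suffix set is an assumption mirroring common CNI conventions):

    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # confDir named in the log

    confs = []
    if CNI_CONF_DIR.is_dir():
        confs = sorted(
            p for p in CNI_CONF_DIR.iterdir()
            if p.suffix in {".conf", ".conflist", ".json"}
        )

    if confs:
        print("CNI configuration present:")
        for p in confs:
            print(" ", p)
    else:
        print(f"no CNI configuration file in {CNI_CONF_DIR}/ -- "
              "the network provider has not written its config yet")
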
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.774536 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.774610 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.774628 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.774661 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.774678 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.878060 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.878197 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.878221 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.878679 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.878901 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.982157 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.982232 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.982251 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.982281 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:42 crc kubenswrapper[4779]: I1128 12:36:42.982303 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:42Z","lastTransitionTime":"2025-11-28T12:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.086472 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.086530 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.086551 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.086579 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.086598 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:43Z","lastTransitionTime":"2025-11-28T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.189535 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.189621 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.189647 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.189701 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.189722 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:43Z","lastTransitionTime":"2025-11-28T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.292320 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.292368 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.292380 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.292403 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.292416 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:43Z","lastTransitionTime":"2025-11-28T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.395502 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.395560 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.395573 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.395596 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.395611 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:43Z","lastTransitionTime":"2025-11-28T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.498343 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.498411 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.498432 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.498460 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.498480 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:43Z","lastTransitionTime":"2025-11-28T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.602075 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.602174 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.602195 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.602226 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.602245 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:43Z","lastTransitionTime":"2025-11-28T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.705300 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.705398 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.705422 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.705459 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.705485 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:43Z","lastTransitionTime":"2025-11-28T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.808478 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.808561 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.808583 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.808615 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.808643 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:43Z","lastTransitionTime":"2025-11-28T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.911549 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.911606 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.911625 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.911648 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:43 crc kubenswrapper[4779]: I1128 12:36:43.911665 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:43Z","lastTransitionTime":"2025-11-28T12:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.014466 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.014510 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.014529 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.014554 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.014571 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:44Z","lastTransitionTime":"2025-11-28T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.122197 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.122261 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.122281 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.122306 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.122323 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:44Z","lastTransitionTime":"2025-11-28T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.225483 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.225549 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.225573 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.225607 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.225631 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:44Z","lastTransitionTime":"2025-11-28T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.328725 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.328790 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.328809 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.328834 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.328851 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:44Z","lastTransitionTime":"2025-11-28T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.436133 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.436201 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.436236 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.436267 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.436286 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:44Z","lastTransitionTime":"2025-11-28T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
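
Annotation: from 12:36:42.260 to this point the journal repeats the same five records (the four NodeHas*/NodeNotReady events plus the "Node became not ready" condition) roughly every 100 ms; only the timestamps advance. When reviewing a saved dump of a journal like this one, consecutive runs can be collapsed into counts, as in this sketch (the input file name is hypothetical):

    import re
    from itertools import groupby

    # Key each line by its event="..." field or condition "reason", ignoring
    # timestamps, so consecutive identical records group into one run.
    KEY = re.compile(r'event="([^"]+)"|"reason":"([^"]+)"')

    def key(line: str) -> str:
        m = KEY.search(line)
        return next((g for g in m.groups() if g), "?") if m else "<other>"

    with open("kubelet-journal.txt") as fh:  # hypothetical saved dump
        for k, run in groupby(fh, key=key):
            print(f"{sum(1 for _ in run):5d}x {k}")
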
Has your network provider started?"} Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.540717 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.540784 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.540798 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.540821 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.540835 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:44Z","lastTransitionTime":"2025-11-28T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.608064 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:44 crc kubenswrapper[4779]: E1128 12:36:44.608265 4779 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 12:36:44 crc kubenswrapper[4779]: E1128 12:36:44.608337 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs podName:2d9943eb-ea06-476d-8736-0a45e588d9f4 nodeName:}" failed. No retries permitted until 2025-11-28 12:37:16.608317386 +0000 UTC m=+97.173992740 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs") pod "network-metrics-daemon-c2psj" (UID: "2d9943eb-ea06-476d-8736-0a45e588d9f4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.644174 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.644236 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.644253 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.644297 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.644328 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:44Z","lastTransitionTime":"2025-11-28T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.725396 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.725495 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.725529 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:44 crc kubenswrapper[4779]: E1128 12:36:44.725601 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.725691 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:44 crc kubenswrapper[4779]: E1128 12:36:44.726219 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:44 crc kubenswrapper[4779]: E1128 12:36:44.726363 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:44 crc kubenswrapper[4779]: E1128 12:36:44.726584 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.726823 4779 scope.go:117] "RemoveContainer" containerID="3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2" Nov 28 12:36:44 crc kubenswrapper[4779]: E1128 12:36:44.730228 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.746372 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.746401 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.746411 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.746426 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.746436 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:44Z","lastTransitionTime":"2025-11-28T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.848927 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.848984 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.849003 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.849027 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.849047 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:44Z","lastTransitionTime":"2025-11-28T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.952884 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.952962 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.952986 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.953019 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:44 crc kubenswrapper[4779]: I1128 12:36:44.953043 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:44Z","lastTransitionTime":"2025-11-28T12:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.055488 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.055753 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.055879 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.055959 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.056028 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:45Z","lastTransitionTime":"2025-11-28T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.158832 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.159170 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.159235 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.159315 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.159382 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:45Z","lastTransitionTime":"2025-11-28T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.164385 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pzwdx_ba664a9e-76d2-4d02-889a-e7062bfc903c/kube-multus/0.log" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.164487 4779 generic.go:334] "Generic (PLEG): container finished" podID="ba664a9e-76d2-4d02-889a-e7062bfc903c" containerID="5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47" exitCode=1 Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.164563 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pzwdx" event={"ID":"ba664a9e-76d2-4d02-889a-e7062bfc903c","Type":"ContainerDied","Data":"5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47"} Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.165484 4779 scope.go:117] "RemoveContainer" containerID="5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.193556 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.225732 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218d
c9abaf935e986271e1fd7ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"message\\\":\\\"h UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1128 12:36:28.904558 6457 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc007ff6d20] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI1128 12:36:28.904608 6457 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 5.545787ms\\\\nF1128 12:36:28.904620 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet vali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.237530 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.248486 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.258605 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.262759 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.262889 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.262996 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.263085 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.263186 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:45Z","lastTransitionTime":"2025-11-28T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.272944 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.287637 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.300383 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.315316 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.327278 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.343609 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739
f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.355047 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.365922 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.365998 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.366013 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.366030 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.366043 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:45Z","lastTransitionTime":"2025-11-28T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.370345 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.383364 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.394644 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.408010 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.420568 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.435069 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:44Z\\\",\\\"message\\\":\\\"2025-11-28T12:35:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e\\\\n2025-11-28T12:35:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e to /host/opt/cni/bin/\\\\n2025-11-28T12:35:59Z [verbose] multus-daemon started\\\\n2025-11-28T12:35:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T12:36:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:45Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.469055 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.469121 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.469136 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.469154 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.469166 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:45Z","lastTransitionTime":"2025-11-28T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.572150 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.572211 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.572224 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.572242 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.572255 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:45Z","lastTransitionTime":"2025-11-28T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.675212 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.675541 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.675738 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.675920 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.676140 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:45Z","lastTransitionTime":"2025-11-28T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.780814 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.780863 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.780876 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.780894 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.780904 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:45Z","lastTransitionTime":"2025-11-28T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.884366 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.884408 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.884418 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.884435 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.884445 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:45Z","lastTransitionTime":"2025-11-28T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.992837 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.992961 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.993026 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.993061 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:45 crc kubenswrapper[4779]: I1128 12:36:45.993154 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:45Z","lastTransitionTime":"2025-11-28T12:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.096741 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.096790 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.096800 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.096817 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.096826 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:46Z","lastTransitionTime":"2025-11-28T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.169830 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pzwdx_ba664a9e-76d2-4d02-889a-e7062bfc903c/kube-multus/0.log" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.169894 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pzwdx" event={"ID":"ba664a9e-76d2-4d02-889a-e7062bfc903c","Type":"ContainerStarted","Data":"3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5"} Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.185494 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.199262 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.199314 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.199326 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.199346 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:46 crc 
kubenswrapper[4779]: I1128 12:36:46.199367 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:46Z","lastTransitionTime":"2025-11-28T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.206819 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\
\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.222508 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.236443 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.248585 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.262834 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.274392 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.290355 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.301112 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.301163 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.301177 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.301196 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.301207 4779 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:46Z","lastTransitionTime":"2025-11-28T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.306628 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.324382 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:44Z\\\",\\\"message\\\":\\\"2025-11-28T12:35:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e\\\\n2025-11-28T12:35:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e to /host/opt/cni/bin/\\\\n2025-11-28T12:35:59Z [verbose] multus-daemon started\\\\n2025-11-28T12:35:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T12:36:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.348986 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.369717 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218d
c9abaf935e986271e1fd7ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"message\\\":\\\"h UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1128 12:36:28.904558 6457 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc007ff6d20] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI1128 12:36:28.904608 6457 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 5.545787ms\\\\nF1128 12:36:28.904620 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet vali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.380361 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.390466 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.404004 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.404049 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.404059 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.404110 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.404136 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:46Z","lastTransitionTime":"2025-11-28T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.405568 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 
12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 
12:36:46.418686 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.430928 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.443146 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:46Z is after 2025-08-24T17:21:41Z" Nov 28 
12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.507147 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.507226 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.507237 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.507254 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.507297 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:46Z","lastTransitionTime":"2025-11-28T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.609786 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.609872 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.609891 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.609914 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.609933 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:46Z","lastTransitionTime":"2025-11-28T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.711954 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.712427 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.712673 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.712908 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.713149 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:46Z","lastTransitionTime":"2025-11-28T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.726240 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.726295 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.726295 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.726255 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:46 crc kubenswrapper[4779]: E1128 12:36:46.726429 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:46 crc kubenswrapper[4779]: E1128 12:36:46.726570 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:46 crc kubenswrapper[4779]: E1128 12:36:46.726686 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:46 crc kubenswrapper[4779]: E1128 12:36:46.726775 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.816783 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.816844 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.816860 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.816903 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.816920 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:46Z","lastTransitionTime":"2025-11-28T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.919972 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.920033 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.920051 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.920077 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:46 crc kubenswrapper[4779]: I1128 12:36:46.920140 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:46Z","lastTransitionTime":"2025-11-28T12:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.022907 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.022972 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.022995 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.023020 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.023038 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:47Z","lastTransitionTime":"2025-11-28T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.126370 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.126464 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.126487 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.126518 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.126540 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:47Z","lastTransitionTime":"2025-11-28T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.229922 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.230038 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.230058 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.230086 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.230132 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:47Z","lastTransitionTime":"2025-11-28T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.333084 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.333145 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.333159 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.333176 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.333188 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:47Z","lastTransitionTime":"2025-11-28T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.436527 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.436611 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.436628 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.436657 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.436674 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:47Z","lastTransitionTime":"2025-11-28T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.539117 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.539164 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.539179 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.539198 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.539211 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:47Z","lastTransitionTime":"2025-11-28T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.642545 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.642594 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.642619 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.642642 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.642654 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:47Z","lastTransitionTime":"2025-11-28T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.746216 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.746276 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.746290 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.746312 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.746324 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:47Z","lastTransitionTime":"2025-11-28T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.848829 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.848889 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.848906 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.848933 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.848953 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:47Z","lastTransitionTime":"2025-11-28T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.952048 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.952162 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.952192 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.952219 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:47 crc kubenswrapper[4779]: I1128 12:36:47.952240 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:47Z","lastTransitionTime":"2025-11-28T12:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.055420 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.055498 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.055521 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.055552 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.055574 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:48Z","lastTransitionTime":"2025-11-28T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.158734 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.158791 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.158802 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.158824 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.158837 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:48Z","lastTransitionTime":"2025-11-28T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.261471 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.261528 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.261540 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.261559 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.261574 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:48Z","lastTransitionTime":"2025-11-28T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.363856 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.363892 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.363902 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.363917 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.363927 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:48Z","lastTransitionTime":"2025-11-28T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.466706 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.466761 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.466774 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.466794 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.466808 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:48Z","lastTransitionTime":"2025-11-28T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.569252 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.569307 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.569317 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.569335 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.569349 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:48Z","lastTransitionTime":"2025-11-28T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.671828 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.671860 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.671868 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.671881 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.671891 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:48Z","lastTransitionTime":"2025-11-28T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.725382 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:48 crc kubenswrapper[4779]: E1128 12:36:48.725553 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.725850 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:48 crc kubenswrapper[4779]: E1128 12:36:48.725920 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.726053 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:48 crc kubenswrapper[4779]: E1128 12:36:48.726166 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.726297 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:48 crc kubenswrapper[4779]: E1128 12:36:48.726453 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.774779 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.774889 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.774913 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.774945 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.774966 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:48Z","lastTransitionTime":"2025-11-28T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.877714 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.877750 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.877760 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.877776 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.877785 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:48Z","lastTransitionTime":"2025-11-28T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.980700 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.980771 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.980783 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.980799 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:48 crc kubenswrapper[4779]: I1128 12:36:48.980812 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:48Z","lastTransitionTime":"2025-11-28T12:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.083807 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.083865 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.083877 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.083897 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.083913 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:49Z","lastTransitionTime":"2025-11-28T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.186744 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.186797 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.186809 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.186826 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.186838 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:49Z","lastTransitionTime":"2025-11-28T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.289916 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.289974 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.289986 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.290008 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.290020 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:49Z","lastTransitionTime":"2025-11-28T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.392281 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.392360 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.392382 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.392411 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.392433 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:49Z","lastTransitionTime":"2025-11-28T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.495878 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.495954 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.495975 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.495990 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.495999 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:49Z","lastTransitionTime":"2025-11-28T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.598726 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.598779 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.598787 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.598804 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.598813 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:49Z","lastTransitionTime":"2025-11-28T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.702277 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.702315 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.702325 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.702343 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.702355 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:49Z","lastTransitionTime":"2025-11-28T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.738962 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.765083 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.794363 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218d
c9abaf935e986271e1fd7ed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"message\\\":\\\"h UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1128 12:36:28.904558 6457 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc007ff6d20] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI1128 12:36:28.904608 6457 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 5.545787ms\\\\nF1128 12:36:28.904620 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet vali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.804963 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.805020 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.805034 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.805054 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.805071 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:49Z","lastTransitionTime":"2025-11-28T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.806166 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.818495 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.832980 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 
12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.849377 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.866248 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.886274 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.901023 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.908076 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.908132 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.908142 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.908161 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.908172 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:49Z","lastTransitionTime":"2025-11-28T12:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.914361 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.927935 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.942073 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.954527 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.969278 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.982379 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:49 crc kubenswrapper[4779]: I1128 12:36:49.997742 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:49Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.011899 4779 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.011936 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.011949 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.011971 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.011986 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:50Z","lastTransitionTime":"2025-11-28T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.014870 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:44Z\\\",\\\"message\\\":\\\"2025-11-28T12:35:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e\\\\n2025-11-28T12:35:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e to /host/opt/cni/bin/\\\\n2025-11-28T12:35:59Z [verbose] multus-daemon started\\\\n2025-11-28T12:35:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T12:36:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:50Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.115189 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.115237 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.115251 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.115272 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.115285 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:50Z","lastTransitionTime":"2025-11-28T12:36:50Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.218277 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.218321 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.218334 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.218358 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.218370 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:50Z","lastTransitionTime":"2025-11-28T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.321455 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.321513 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.321526 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.321549 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.321565 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:50Z","lastTransitionTime":"2025-11-28T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.424527 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.424575 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.424584 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.424600 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.424608 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:50Z","lastTransitionTime":"2025-11-28T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.527060 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.527122 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.527133 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.527153 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.527166 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:50Z","lastTransitionTime":"2025-11-28T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.631260 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.631327 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.631346 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.631373 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.631392 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:50Z","lastTransitionTime":"2025-11-28T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.725391 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.725492 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:50 crc kubenswrapper[4779]: E1128 12:36:50.725557 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.725604 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.725675 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:50 crc kubenswrapper[4779]: E1128 12:36:50.725853 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:50 crc kubenswrapper[4779]: E1128 12:36:50.725974 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:50 crc kubenswrapper[4779]: E1128 12:36:50.726225 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.734575 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.734625 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.734642 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.734664 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.734685 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:50Z","lastTransitionTime":"2025-11-28T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.837543 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.837594 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.837605 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.837625 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.837639 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:50Z","lastTransitionTime":"2025-11-28T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.939964 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.940024 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.940040 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.940065 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:50 crc kubenswrapper[4779]: I1128 12:36:50.940082 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:50Z","lastTransitionTime":"2025-11-28T12:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.044410 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.044508 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.044534 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.044567 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.044590 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:51Z","lastTransitionTime":"2025-11-28T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.147819 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.147888 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.147907 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.147933 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.147954 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:51Z","lastTransitionTime":"2025-11-28T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.251150 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.251224 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.251244 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.251273 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.251297 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:51Z","lastTransitionTime":"2025-11-28T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.353795 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.353864 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.353881 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.353906 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.353926 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:51Z","lastTransitionTime":"2025-11-28T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.457443 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.457507 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.457523 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.457550 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.457572 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:51Z","lastTransitionTime":"2025-11-28T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.560571 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.560612 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.560625 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.560643 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.560658 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:51Z","lastTransitionTime":"2025-11-28T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.663447 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.663518 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.663540 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.663569 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.663593 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:51Z","lastTransitionTime":"2025-11-28T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.766386 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.766445 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.766462 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.766488 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.766504 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:51Z","lastTransitionTime":"2025-11-28T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.869502 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.869558 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.869570 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.869592 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.869605 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:51Z","lastTransitionTime":"2025-11-28T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.973076 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.973183 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.973207 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.973240 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:51 crc kubenswrapper[4779]: I1128 12:36:51.973263 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:51Z","lastTransitionTime":"2025-11-28T12:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.076145 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.076212 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.076229 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.076256 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.076273 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.178954 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.179021 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.179031 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.179064 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.179075 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.182340 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.182382 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.182392 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.182403 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.182410 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: E1128 12:36:52.198069 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:52Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.202735 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.202775 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.202792 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.202821 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.202845 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: E1128 12:36:52.224708 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:52Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.229053 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.229169 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.229209 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.229252 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.229276 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: E1128 12:36:52.250357 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:52Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.254813 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.254877 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.254897 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.254924 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.254943 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: E1128 12:36:52.272318 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:52Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.278868 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.278923 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.278942 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.278966 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.278982 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: E1128 12:36:52.293374 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:52Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:52 crc kubenswrapper[4779]: E1128 12:36:52.293609 4779 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.296078 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.296199 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.296219 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.296247 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.296266 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.399279 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.399351 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.399375 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.399403 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.399421 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.503856 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.503943 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.503959 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.504008 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.504026 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.607537 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.607589 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.607601 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.607619 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.607630 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.710987 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.711051 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.711070 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.711129 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.711146 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.725285 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.725285 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:52 crc kubenswrapper[4779]: E1128 12:36:52.725574 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.725290 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:52 crc kubenswrapper[4779]: E1128 12:36:52.725667 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:52 crc kubenswrapper[4779]: E1128 12:36:52.725750 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.725314 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:52 crc kubenswrapper[4779]: E1128 12:36:52.725882 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.813657 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.813728 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.813746 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.813798 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.813816 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.916726 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.916776 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.916788 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.916807 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:52 crc kubenswrapper[4779]: I1128 12:36:52.916820 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:52Z","lastTransitionTime":"2025-11-28T12:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.020040 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.020146 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.020172 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.020203 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.020226 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:53Z","lastTransitionTime":"2025-11-28T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.123074 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.123133 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.123141 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.123157 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.123166 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:53Z","lastTransitionTime":"2025-11-28T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.225419 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.225509 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.225530 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.225559 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.225581 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:53Z","lastTransitionTime":"2025-11-28T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.327807 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.327898 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.327921 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.327954 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.327980 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:53Z","lastTransitionTime":"2025-11-28T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.431612 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.431670 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.431682 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.431705 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.431719 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:53Z","lastTransitionTime":"2025-11-28T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.534709 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.534770 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.534783 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.534802 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.534814 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:53Z","lastTransitionTime":"2025-11-28T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.637665 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.637733 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.637751 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.637776 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.637793 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:53Z","lastTransitionTime":"2025-11-28T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.740660 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.740731 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.740748 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.740776 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.740796 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:53Z","lastTransitionTime":"2025-11-28T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.843552 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.843617 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.843634 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.843659 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.843678 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:53Z","lastTransitionTime":"2025-11-28T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.947412 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.947459 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.947470 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.947484 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:53 crc kubenswrapper[4779]: I1128 12:36:53.947495 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:53Z","lastTransitionTime":"2025-11-28T12:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.051369 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.051427 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.051443 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.051466 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.051534 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:54Z","lastTransitionTime":"2025-11-28T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.155132 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.155198 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.155219 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.155246 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.155264 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:54Z","lastTransitionTime":"2025-11-28T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.258427 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.258774 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.258929 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.259126 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.259287 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:54Z","lastTransitionTime":"2025-11-28T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.362993 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.363036 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.363049 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.363064 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.363075 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:54Z","lastTransitionTime":"2025-11-28T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.467632 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.467708 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.467727 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.467755 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.467774 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:54Z","lastTransitionTime":"2025-11-28T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.571275 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.571356 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.571411 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.571441 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.571465 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:54Z","lastTransitionTime":"2025-11-28T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.675053 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.675153 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.675171 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.675196 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.675213 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:54Z","lastTransitionTime":"2025-11-28T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.726256 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.726406 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:54 crc kubenswrapper[4779]: E1128 12:36:54.726654 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.726730 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.726787 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:54 crc kubenswrapper[4779]: E1128 12:36:54.726894 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:54 crc kubenswrapper[4779]: E1128 12:36:54.727048 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:54 crc kubenswrapper[4779]: E1128 12:36:54.727136 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.778032 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.778078 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.778088 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.778122 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.778133 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:54Z","lastTransitionTime":"2025-11-28T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.881617 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.881711 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.881722 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.881742 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.881757 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:54Z","lastTransitionTime":"2025-11-28T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.984889 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.984978 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.985001 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.985032 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:54 crc kubenswrapper[4779]: I1128 12:36:54.985051 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:54Z","lastTransitionTime":"2025-11-28T12:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.087392 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.087445 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.087457 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.087476 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.087490 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:55Z","lastTransitionTime":"2025-11-28T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.190310 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.190391 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.190408 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.190436 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.190454 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:55Z","lastTransitionTime":"2025-11-28T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.294558 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.294623 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.294642 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.294667 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.294685 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:55Z","lastTransitionTime":"2025-11-28T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.397309 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.397392 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.397435 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.397457 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.397474 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:55Z","lastTransitionTime":"2025-11-28T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.500694 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.500771 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.500792 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.500824 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.500847 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:55Z","lastTransitionTime":"2025-11-28T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.604175 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.604240 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.604257 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.604283 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.604301 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:55Z","lastTransitionTime":"2025-11-28T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.707591 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.707647 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.707663 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.707685 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.707699 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:55Z","lastTransitionTime":"2025-11-28T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.810599 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.810663 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.810685 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.810709 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.810727 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:55Z","lastTransitionTime":"2025-11-28T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.914559 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.914649 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.914677 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.914710 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:55 crc kubenswrapper[4779]: I1128 12:36:55.914734 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:55Z","lastTransitionTime":"2025-11-28T12:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.018237 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.018300 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.018313 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.018338 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.018352 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:56Z","lastTransitionTime":"2025-11-28T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.121913 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.121960 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.121970 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.121991 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.122004 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:56Z","lastTransitionTime":"2025-11-28T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.224796 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.224873 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.224892 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.224920 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.224939 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:56Z","lastTransitionTime":"2025-11-28T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.327760 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.327832 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.327850 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.327876 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.327896 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:56Z","lastTransitionTime":"2025-11-28T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.431596 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.431653 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.431668 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.431690 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.431707 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:56Z","lastTransitionTime":"2025-11-28T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.534827 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.534874 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.534888 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.534906 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.534920 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:56Z","lastTransitionTime":"2025-11-28T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.637251 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.637295 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.637306 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.637325 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.637337 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:56Z","lastTransitionTime":"2025-11-28T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.725498 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.725659 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.725809 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:56 crc kubenswrapper[4779]: E1128 12:36:56.725908 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.725931 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:56 crc kubenswrapper[4779]: E1128 12:36:56.726574 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:56 crc kubenswrapper[4779]: E1128 12:36:56.726708 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:56 crc kubenswrapper[4779]: E1128 12:36:56.726862 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.727209 4779 scope.go:117] "RemoveContainer" containerID="3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.739486 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.739536 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.739554 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.739577 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.739591 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:56Z","lastTransitionTime":"2025-11-28T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.744395 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.844589 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.844656 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.844673 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.844702 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.844720 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:56Z","lastTransitionTime":"2025-11-28T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.947836 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.947932 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.947955 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.947983 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:56 crc kubenswrapper[4779]: I1128 12:36:56.948006 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:56Z","lastTransitionTime":"2025-11-28T12:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.051658 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.051769 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.051790 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.051822 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.051842 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:57Z","lastTransitionTime":"2025-11-28T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.155810 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.155884 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.155900 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.155923 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.155938 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:57Z","lastTransitionTime":"2025-11-28T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.259562 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.259638 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.259659 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.259685 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.259705 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:57Z","lastTransitionTime":"2025-11-28T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.363528 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.363632 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.363661 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.363693 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.363716 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:57Z","lastTransitionTime":"2025-11-28T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.467252 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.467316 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.467334 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.467366 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.467384 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:57Z","lastTransitionTime":"2025-11-28T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.569769 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.569847 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.569870 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.569899 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.569921 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:57Z","lastTransitionTime":"2025-11-28T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.673119 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.673228 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.673268 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.673303 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.673322 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:57Z","lastTransitionTime":"2025-11-28T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.777152 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.777233 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.777254 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.777292 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.777315 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:57Z","lastTransitionTime":"2025-11-28T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.880952 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.881025 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.881044 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.881076 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.881137 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:57Z","lastTransitionTime":"2025-11-28T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.984790 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.984879 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.984909 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.984947 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:57 crc kubenswrapper[4779]: I1128 12:36:57.984970 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:57Z","lastTransitionTime":"2025-11-28T12:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.088587 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.088649 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.088668 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.088694 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.088713 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:58Z","lastTransitionTime":"2025-11-28T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.192207 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.192287 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.192305 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.192332 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.192352 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:58Z","lastTransitionTime":"2025-11-28T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.296470 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.296547 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.296573 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.296605 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.296628 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:58Z","lastTransitionTime":"2025-11-28T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.398928 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.398998 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.399017 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.399042 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.399059 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:58Z","lastTransitionTime":"2025-11-28T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.502422 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.502491 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.502504 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.502526 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.502539 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:58Z","lastTransitionTime":"2025-11-28T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.605770 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.605844 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.605867 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.605900 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.605926 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:58Z","lastTransitionTime":"2025-11-28T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.709612 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.709968 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.709982 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.710001 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.710013 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:58Z","lastTransitionTime":"2025-11-28T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.726066 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:36:58 crc kubenswrapper[4779]: E1128 12:36:58.726208 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.726376 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:36:58 crc kubenswrapper[4779]: E1128 12:36:58.726441 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.726576 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:36:58 crc kubenswrapper[4779]: E1128 12:36:58.726639 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.726817 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:36:58 crc kubenswrapper[4779]: E1128 12:36:58.726890 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.812523 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.812564 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.812574 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.812595 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.812607 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:58Z","lastTransitionTime":"2025-11-28T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.915668 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.915736 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.915756 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.915781 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:58 crc kubenswrapper[4779]: I1128 12:36:58.915801 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:58Z","lastTransitionTime":"2025-11-28T12:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.019177 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.019223 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.019239 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.019273 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.019291 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:59Z","lastTransitionTime":"2025-11-28T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.122507 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.122545 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.122556 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.122574 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.122587 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:59Z","lastTransitionTime":"2025-11-28T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.224628 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.224685 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.224704 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.224729 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.224748 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:59Z","lastTransitionTime":"2025-11-28T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.227145 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/2.log" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.232171 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b"} Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.232723 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.249630 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.264505 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.287738 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.317763 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae861b14ca36a4b482a48b94ffda32e0d188204
f356dfe60e2d8778b284dc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"message\\\":\\\"h UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1128 12:36:28.904558 6457 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc007ff6d20] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI1128 12:36:28.904608 6457 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 5.545787ms\\\\nF1128 12:36:28.904620 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
vali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.328161 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.328224 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.328244 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.328275 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.328294 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:59Z","lastTransitionTime":"2025-11-28T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.334716 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.351428 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.369635 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 
12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.386896 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d2732-7fd1-4fa8-9da7-74872484e3f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ca35c83bfed6e6b9e11bc2acb282ab619c3a04941a8ed540853cdd43531a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.404927 4779 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6
a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.422119 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.431207 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.431271 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.431290 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.431314 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.431331 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:59Z","lastTransitionTime":"2025-11-28T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.438327 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.455207 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.504683 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.531490 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.533874 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.533921 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:59 crc 
kubenswrapper[4779]: I1128 12:36:59.533937 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.533960 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.533974 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:59Z","lastTransitionTime":"2025-11-28T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.546495 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.559503 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.573769 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:44Z\\\",\\\"message\\\":\\\"2025-11-28T12:35:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e\\\\n2025-11-28T12:35:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e to /host/opt/cni/bin/\\\\n2025-11-28T12:35:59Z [verbose] 
multus-daemon started\\\\n2025-11-28T12:35:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T12:36:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.585285 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.595614 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee
3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.636391 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.636434 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.636448 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.636464 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.636477 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:59Z","lastTransitionTime":"2025-11-28T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.738549 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.738619 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.738643 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.738671 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.738695 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:59Z","lastTransitionTime":"2025-11-28T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.744544 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc 
kubenswrapper[4779]: I1128 12:36:59.763946 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.777640 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b0
8ff3af6bdc2ad10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.797538 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.808358 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.825255 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.840338 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.840697 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.840738 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.840745 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.840762 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.840773 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:59Z","lastTransitionTime":"2025-11-28T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.851927 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.862321 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.876199 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:44Z\\\",\\\"message\\\":\\\"2025-11-28T12:35:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e\\\\n2025-11-28T12:35:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e to /host/opt/cni/bin/\\\\n2025-11-28T12:35:59Z [verbose] multus-daemon started\\\\n2025-11-28T12:35:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T12:36:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.893141 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.909702 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae861b14ca36a4b482a48b94ffda32e0d188204
f356dfe60e2d8778b284dc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"message\\\":\\\"h UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1128 12:36:28.904558 6457 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc007ff6d20] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI1128 12:36:28.904608 6457 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 5.545787ms\\\\nF1128 12:36:28.904620 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
vali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.923121 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.934289 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.944747 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.944741 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d2732-7fd1-4fa8-9da7-74872484e3f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ca35c83bfed6e6b9e11bc2acb282ab619c3a04941a8ed540853cdd43531a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.944802 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.944822 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.944851 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.944872 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:36:59Z","lastTransitionTime":"2025-11-28T12:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.964073 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.982332 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:36:59 crc kubenswrapper[4779]: I1128 12:36:59.997747 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:36:59Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.015195 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 
12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.050831 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.050874 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.050886 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.050907 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.050921 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:00Z","lastTransitionTime":"2025-11-28T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.153534 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.153586 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.153604 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.153627 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.153644 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:00Z","lastTransitionTime":"2025-11-28T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.237465 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/3.log" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.238351 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/2.log" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.241950 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b" exitCode=1 Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.241998 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b"} Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.242043 4779 scope.go:117] "RemoveContainer" containerID="3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.243149 4779 scope.go:117] "RemoveContainer" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b" Nov 28 12:37:00 crc kubenswrapper[4779]: E1128 12:37:00.243443 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.257140 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.257432 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.257452 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.257478 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.257493 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:00Z","lastTransitionTime":"2025-11-28T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.262076 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.279306 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.295433 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.320363 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.339343 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.354973 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.360730 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:00 
crc kubenswrapper[4779]: I1128 12:37:00.360787 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.360800 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.360822 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.360839 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:00Z","lastTransitionTime":"2025-11-28T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.377721 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-co
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"m
ountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.395506 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.410845 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.429037 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:44Z\\\",\\\"message\\\":\\\"2025-11-28T12:35:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e\\\\n2025-11-28T12:35:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e to /host/opt/cni/bin/\\\\n2025-11-28T12:35:59Z [verbose] multus-daemon started\\\\n2025-11-28T12:35:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T12:36:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.455624 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.464134 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.464176 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.464189 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.464210 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.464226 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:00Z","lastTransitionTime":"2025-11-28T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.482454 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f78ed0375efd54092331e1cbb01c168e6cc218dc9abaf935e986271e1fd7ed2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"message\\\":\\\"h UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1128 12:36:28.904558 6457 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-xd92c): added port \\\\u0026{name:openshift-network-diagnostics_network-check-target-xd92c uuid:61897e97-c771-4738-8709-09636387cb00 logicalSwitch:crc ips:[0xc007ff6d20] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:\\\\u003cnil\\\\u003e}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04\\\\nI1128 12:36:28.904608 6457 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 5.545787ms\\\\nF1128 12:36:28.904620 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
vali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:59Z\\\",\\\"message\\\":\\\"128 12:36:59.339929 6826 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:59.339995 6826 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:59.340052 6826 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:59.340142 6826 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:59.340166 6826 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:59.340187 6826 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:59.340197 6826 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:59.340234 6826 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:59.340231 6826 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:59.340253 6826 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:59.340268 6826 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:59.340268 6826 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:59.340288 6826 factory.go:656] Stopping watch factory\\\\nI1128 12:36:59.340293 6826 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:59.340293 6826 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:59.340300 6826 ovnkube.go:599] Stopped ovnkube\\\\nI1128 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.499623 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.517980 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.531693 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d2732-7fd1-4fa8-9da7-74872484e3f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ca35c83bfed6e6b9e11bc2acb282ab619c3a04941a8ed540853cdd43531a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.554684 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
1-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.567651 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.567725 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.567737 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.567755 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.567785 4779 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:00Z","lastTransitionTime":"2025-11-28T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.571700 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.591423 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.607409 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:00Z is after 2025-08-24T17:21:41Z" Nov 28 
12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.670854 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.670912 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.670922 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.670939 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.670951 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:00Z","lastTransitionTime":"2025-11-28T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.725514 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.725562 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:00 crc kubenswrapper[4779]: E1128 12:37:00.725678 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.725738 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.725759 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:00 crc kubenswrapper[4779]: E1128 12:37:00.725910 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:00 crc kubenswrapper[4779]: E1128 12:37:00.726070 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:00 crc kubenswrapper[4779]: E1128 12:37:00.726174 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.774387 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.774441 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.774461 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.774489 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.774507 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:00Z","lastTransitionTime":"2025-11-28T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.878317 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.878382 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.878398 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.878422 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.878438 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:00Z","lastTransitionTime":"2025-11-28T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.981734 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.981818 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.981832 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.981879 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:00 crc kubenswrapper[4779]: I1128 12:37:00.981894 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:00Z","lastTransitionTime":"2025-11-28T12:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.085576 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.085647 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.085666 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.085695 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.085713 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:01Z","lastTransitionTime":"2025-11-28T12:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.189459 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.189532 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.189546 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.189572 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.189590 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:01Z","lastTransitionTime":"2025-11-28T12:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.250043 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/3.log" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.255692 4779 scope.go:117] "RemoveContainer" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b" Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.256016 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.276814 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.293173 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.293315 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.293342 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.293381 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.293405 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:01Z","lastTransitionTime":"2025-11-28T12:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.294870 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.311645 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.334591 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.354882 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.370995 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.397171 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:01 
crc kubenswrapper[4779]: I1128 12:37:01.397367 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.397470 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.397518 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.397545 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:01Z","lastTransitionTime":"2025-11-28T12:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.397854 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-co
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"m
ountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.420520 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.440548 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.460342 4779 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:44Z\\\",\\\"message\\\":\\\"2025-11-28T12:35:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e\\\\n2025-11-28T12:35:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e to /host/opt/cni/bin/\\\\n2025-11-28T12:35:59Z [verbose] multus-daemon started\\\\n2025-11-28T12:35:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T12:36:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.494905 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.501492 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.501543 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.501556 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.501576 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.501589 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:01Z","lastTransitionTime":"2025-11-28T12:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.529218 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:59Z\\\",\\\"message\\\":\\\"128 12:36:59.339929 6826 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:59.339995 6826 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:59.340052 6826 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:59.340142 6826 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:59.340166 6826 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:59.340187 6826 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:59.340197 6826 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:59.340234 6826 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:59.340231 6826 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:59.340253 6826 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:59.340268 6826 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:59.340268 6826 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:59.340288 6826 factory.go:656] Stopping watch factory\\\\nI1128 12:36:59.340293 6826 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:59.340293 6826 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:59.340300 6826 ovnkube.go:599] Stopped ovnkube\\\\nI1128 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.547722 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.563920 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.579925 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d2732-7fd1-4fa8-9da7-74872484e3f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ca35c83bfed6e6b9e11bc2acb282ab619c3a04941a8ed540853cdd43531a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.599773 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.612983 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.613046 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.613063 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.613087 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.613121 4779 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:01Z","lastTransitionTime":"2025-11-28T12:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.623121 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.640359 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.658573 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:01Z is after 2025-08-24T17:21:41Z" Nov 28 
12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.716663 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.716737 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.716762 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.716797 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.716823 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:01Z","lastTransitionTime":"2025-11-28T12:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.812642 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.812819 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.812871 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813035 4779 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.813187 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813211 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813353 4779 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813361 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813468 4779 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813279 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:38:05.813241702 +0000 UTC m=+146.378917086 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813552 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 12:38:05.813521999 +0000 UTC m=+146.379197393 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.813565 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813700 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813707 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 12:38:05.813660833 +0000 UTC m=+146.379336227 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813735 4779 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813757 4779 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813767 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:05.813752385 +0000 UTC m=+146.379427779 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:37:01 crc kubenswrapper[4779]: E1128 12:37:01.813813 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 12:38:05.813793647 +0000 UTC m=+146.379469101 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.821175 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.821272 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.821377 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.821467 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.821505 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:01Z","lastTransitionTime":"2025-11-28T12:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.924765 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.924809 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.924821 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.924838 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:01 crc kubenswrapper[4779]: I1128 12:37:01.924849 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:01Z","lastTransitionTime":"2025-11-28T12:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.028635 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.028716 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.028740 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.028771 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.028796 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.131726 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.131804 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.131821 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.131850 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.131877 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.235734 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.235807 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.235827 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.235856 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.235874 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.339542 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.339625 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.339641 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.339667 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.339686 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.443438 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.443505 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.443525 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.443550 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.443567 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.501709 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.501783 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.501801 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.501832 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.501850 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: E1128 12:37:02.522734 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.528082 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.528204 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.528225 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.528256 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.528280 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: E1128 12:37:02.552869 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.560407 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.560470 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.560491 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.560521 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.560546 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: E1128 12:37:02.583722 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.589627 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.589686 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.589704 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.589729 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.589747 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: E1128 12:37:02.613347 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.621351 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.621430 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.621449 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.621476 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.621496 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: E1128 12:37:02.641931 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:02Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:02 crc kubenswrapper[4779]: E1128 12:37:02.642203 4779 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.644154 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.644221 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.644247 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.644277 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.644302 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.725371 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.725452 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.725676 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.725794 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:02 crc kubenswrapper[4779]: E1128 12:37:02.725889 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:02 crc kubenswrapper[4779]: E1128 12:37:02.726016 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:02 crc kubenswrapper[4779]: E1128 12:37:02.726247 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:02 crc kubenswrapper[4779]: E1128 12:37:02.726351 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.747740 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.747798 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.747816 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.747839 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.747857 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.850374 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.850450 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.850479 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.850511 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.850536 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.953789 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.953851 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.953865 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.953887 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:02 crc kubenswrapper[4779]: I1128 12:37:02.953902 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:02Z","lastTransitionTime":"2025-11-28T12:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.057737 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.057804 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.057819 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.057838 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.057848 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:03Z","lastTransitionTime":"2025-11-28T12:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.161531 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.161614 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.161636 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.161665 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.161686 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:03Z","lastTransitionTime":"2025-11-28T12:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.264545 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.264626 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.264648 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.264676 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.264700 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:03Z","lastTransitionTime":"2025-11-28T12:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.366962 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.367021 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.367033 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.367055 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.367071 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:03Z","lastTransitionTime":"2025-11-28T12:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.471242 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.471285 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.471294 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.471312 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.471321 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:03Z","lastTransitionTime":"2025-11-28T12:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.574228 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.574458 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.574486 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.574520 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.574544 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:03Z","lastTransitionTime":"2025-11-28T12:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.678586 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.678648 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.678661 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.678684 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.678700 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:03Z","lastTransitionTime":"2025-11-28T12:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.782143 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.782184 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.782191 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.782207 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.782217 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:03Z","lastTransitionTime":"2025-11-28T12:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.885629 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.885685 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.885698 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.885725 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.885739 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:03Z","lastTransitionTime":"2025-11-28T12:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.989542 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.989617 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.989635 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.989663 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:03 crc kubenswrapper[4779]: I1128 12:37:03.989683 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:03Z","lastTransitionTime":"2025-11-28T12:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.092814 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.092899 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.092922 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.092951 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.092974 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:04Z","lastTransitionTime":"2025-11-28T12:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.196680 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.196729 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.196742 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.196761 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.196774 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:04Z","lastTransitionTime":"2025-11-28T12:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.299184 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.299256 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.299275 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.299306 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.299328 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:04Z","lastTransitionTime":"2025-11-28T12:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.402331 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.402400 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.402421 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.402450 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.402472 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:04Z","lastTransitionTime":"2025-11-28T12:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.509606 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.509997 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.510027 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.510055 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.510083 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:04Z","lastTransitionTime":"2025-11-28T12:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.614158 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.614226 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.614243 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.614269 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.614287 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:04Z","lastTransitionTime":"2025-11-28T12:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.717553 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.717605 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.717621 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.717645 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.717665 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:04Z","lastTransitionTime":"2025-11-28T12:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.725728 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.725756 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.725944 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:04 crc kubenswrapper[4779]: E1128 12:37:04.725949 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:04 crc kubenswrapper[4779]: E1128 12:37:04.726164 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:04 crc kubenswrapper[4779]: E1128 12:37:04.726231 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.726202 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:04 crc kubenswrapper[4779]: E1128 12:37:04.726564 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.821206 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.821273 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.821298 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.821331 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.821349 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:04Z","lastTransitionTime":"2025-11-28T12:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.923880 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.923928 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.923942 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.923964 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:04 crc kubenswrapper[4779]: I1128 12:37:04.923979 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:04Z","lastTransitionTime":"2025-11-28T12:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.026336 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.026398 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.026410 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.026432 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.026444 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:05Z","lastTransitionTime":"2025-11-28T12:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.130318 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.130448 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.130466 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.130492 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.130512 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:05Z","lastTransitionTime":"2025-11-28T12:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.234054 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.234187 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.234205 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.234254 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.234274 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:05Z","lastTransitionTime":"2025-11-28T12:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.337543 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.337620 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.337640 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.337666 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.337683 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:05Z","lastTransitionTime":"2025-11-28T12:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.440665 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.440725 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.440741 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.440769 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.440786 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:05Z","lastTransitionTime":"2025-11-28T12:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.544195 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.544263 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.544283 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.544307 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.544324 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:05Z","lastTransitionTime":"2025-11-28T12:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.650189 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.650259 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.650278 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.650305 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.650323 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:05Z","lastTransitionTime":"2025-11-28T12:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.752873 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.752943 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.752961 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.752986 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.753004 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:05Z","lastTransitionTime":"2025-11-28T12:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.856626 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.856700 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.856718 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.856745 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.856763 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:05Z","lastTransitionTime":"2025-11-28T12:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.959548 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.959592 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.959601 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.959616 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:05 crc kubenswrapper[4779]: I1128 12:37:05.959625 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:05Z","lastTransitionTime":"2025-11-28T12:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.062040 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.062155 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.062182 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.062213 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.062235 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:06Z","lastTransitionTime":"2025-11-28T12:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.165187 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.165258 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.165282 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.165315 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.165339 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:06Z","lastTransitionTime":"2025-11-28T12:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.267425 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.267469 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.267480 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.267496 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.267509 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:06Z","lastTransitionTime":"2025-11-28T12:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.370751 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.370807 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.370818 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.370842 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.370854 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:06Z","lastTransitionTime":"2025-11-28T12:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.473873 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.473935 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.473946 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.473966 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.473981 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:06Z","lastTransitionTime":"2025-11-28T12:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.577551 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.577594 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.577634 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.577676 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.577727 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:06Z","lastTransitionTime":"2025-11-28T12:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.680765 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.680828 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.680850 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.680880 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.680899 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:06Z","lastTransitionTime":"2025-11-28T12:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.725628 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.725723 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.725730 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:06 crc kubenswrapper[4779]: E1128 12:37:06.725853 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.725870 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:06 crc kubenswrapper[4779]: E1128 12:37:06.726060 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:06 crc kubenswrapper[4779]: E1128 12:37:06.726472 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:06 crc kubenswrapper[4779]: E1128 12:37:06.726579 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.783685 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.783755 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.783776 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.783808 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.783861 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:06Z","lastTransitionTime":"2025-11-28T12:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.887794 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.887868 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.887906 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.887933 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.887947 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:06Z","lastTransitionTime":"2025-11-28T12:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.991506 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.991580 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.991590 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.991608 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:06 crc kubenswrapper[4779]: I1128 12:37:06.991623 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:06Z","lastTransitionTime":"2025-11-28T12:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.094663 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.094758 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.094777 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.094802 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.094818 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:07Z","lastTransitionTime":"2025-11-28T12:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.198714 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.198791 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.198815 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.198846 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.198869 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:07Z","lastTransitionTime":"2025-11-28T12:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.302427 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.302513 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.302540 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.302576 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.302601 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:07Z","lastTransitionTime":"2025-11-28T12:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.406433 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.406482 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.406534 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.406559 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.406576 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:07Z","lastTransitionTime":"2025-11-28T12:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.509053 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.509283 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.509304 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.509330 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.509347 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:07Z","lastTransitionTime":"2025-11-28T12:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.612803 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.612985 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.613005 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.613071 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.613148 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:07Z","lastTransitionTime":"2025-11-28T12:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.716283 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.716339 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.716351 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.716371 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.716383 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:07Z","lastTransitionTime":"2025-11-28T12:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.819726 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.819772 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.819780 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.819794 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.819803 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:07Z","lastTransitionTime":"2025-11-28T12:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.923052 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.923144 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.923162 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.923187 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:07 crc kubenswrapper[4779]: I1128 12:37:07.923205 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:07Z","lastTransitionTime":"2025-11-28T12:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.026635 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.026699 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.026717 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.026743 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.026761 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:08Z","lastTransitionTime":"2025-11-28T12:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.130210 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.130278 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.130305 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.130340 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.130363 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:08Z","lastTransitionTime":"2025-11-28T12:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.233874 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.233967 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.234082 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.234177 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.234199 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:08Z","lastTransitionTime":"2025-11-28T12:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.336747 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.336799 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.336814 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.336837 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.336855 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:08Z","lastTransitionTime":"2025-11-28T12:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.440171 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.440241 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.440260 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.440288 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.440306 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:08Z","lastTransitionTime":"2025-11-28T12:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.545188 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.545263 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.545280 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.545306 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.545327 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:08Z","lastTransitionTime":"2025-11-28T12:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.649398 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.649560 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.649582 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.649608 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.649625 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:08Z","lastTransitionTime":"2025-11-28T12:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.726253 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.726367 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.726269 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.726513 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:08 crc kubenswrapper[4779]: E1128 12:37:08.726459 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:08 crc kubenswrapper[4779]: E1128 12:37:08.726708 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:08 crc kubenswrapper[4779]: E1128 12:37:08.726970 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:08 crc kubenswrapper[4779]: E1128 12:37:08.727053 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.753036 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.753133 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.753150 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.753175 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.753191 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:08Z","lastTransitionTime":"2025-11-28T12:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.856950 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.857029 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.857047 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.857076 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.857154 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:08Z","lastTransitionTime":"2025-11-28T12:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.959984 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.960051 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.960068 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.960138 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:08 crc kubenswrapper[4779]: I1128 12:37:08.960171 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:08Z","lastTransitionTime":"2025-11-28T12:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.063556 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.063630 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.063649 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.063680 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.063700 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:09Z","lastTransitionTime":"2025-11-28T12:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.167776 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.167838 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.167858 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.167887 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.167906 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:09Z","lastTransitionTime":"2025-11-28T12:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.272498 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.272579 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.272598 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.272663 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.272689 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:09Z","lastTransitionTime":"2025-11-28T12:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.376804 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.376896 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.376922 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.376950 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.376970 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:09Z","lastTransitionTime":"2025-11-28T12:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.480663 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.480714 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.480728 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.480748 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.480762 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:09Z","lastTransitionTime":"2025-11-28T12:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.584088 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.584152 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.584164 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.584181 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.584193 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:09Z","lastTransitionTime":"2025-11-28T12:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.687960 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.688003 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.688014 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.688034 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.688045 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:09Z","lastTransitionTime":"2025-11-28T12:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.743267 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.764498 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.791397 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.791460 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.791477 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.791502 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.791520 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:09Z","lastTransitionTime":"2025-11-28T12:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.800242 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.832445 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:59Z\\\",\\\"message\\\":\\\"128 12:36:59.339929 6826 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:59.339995 6826 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:59.340052 6826 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:59.340142 6826 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:59.340166 6826 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:59.340187 6826 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:59.340197 6826 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:59.340234 6826 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:59.340231 6826 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:59.340253 6826 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:59.340268 6826 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:59.340268 6826 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:59.340288 6826 factory.go:656] Stopping watch factory\\\\nI1128 12:36:59.340293 6826 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:59.340293 6826 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:59.340300 6826 ovnkube.go:599] Stopped ovnkube\\\\nI1128 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.856960 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.874627 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.894715 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:09Z is after 2025-08-24T17:21:41Z" Nov 28 
12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.895303 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.895386 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.895405 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.895875 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.895982 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:09Z","lastTransitionTime":"2025-11-28T12:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.911489 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d2732-7fd1-4fa8-9da7-74872484e3f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ca35c83bfed6e6b9e11bc2acb282ab619c3a04941a8ed540853cdd43531a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.933626 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.953008 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.972742 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:09 crc kubenswrapper[4779]: I1128 12:37:09.991483 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:09Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.000907 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.000992 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.001019 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.001053 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.001076 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:10Z","lastTransitionTime":"2025-11-28T12:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.011725 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.036898 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.056413 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.078435 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.100591 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:44Z\\\",\\\"message\\\":\\\"2025-11-28T12:35:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e\\\\n2025-11-28T12:35:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e to /host/opt/cni/bin/\\\\n2025-11-28T12:35:59Z [verbose] multus-daemon started\\\\n2025-11-28T12:35:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T12:36:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.104534 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.104568 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.104580 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.104599 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.104612 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:10Z","lastTransitionTime":"2025-11-28T12:37:10Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.122513 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.140333 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:10Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.207033 4779 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.207533 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.207724 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.207890 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.208075 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:10Z","lastTransitionTime":"2025-11-28T12:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.311187 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.311693 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.312033 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.312286 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.312532 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:10Z","lastTransitionTime":"2025-11-28T12:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.416473 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.416865 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.417313 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.417552 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.417795 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:10Z","lastTransitionTime":"2025-11-28T12:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.520812 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.520886 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.520909 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.520942 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.520963 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:10Z","lastTransitionTime":"2025-11-28T12:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.624604 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.624672 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.624689 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.624719 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.624738 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:10Z","lastTransitionTime":"2025-11-28T12:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.726184 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.726242 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.726334 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.726353 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:10 crc kubenswrapper[4779]: E1128 12:37:10.726414 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:10 crc kubenswrapper[4779]: E1128 12:37:10.726681 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:10 crc kubenswrapper[4779]: E1128 12:37:10.726654 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:10 crc kubenswrapper[4779]: E1128 12:37:10.726744 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.732476 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.732561 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.732590 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.732624 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.732658 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:10Z","lastTransitionTime":"2025-11-28T12:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.835530 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.835567 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.835577 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.835592 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.835604 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:10Z","lastTransitionTime":"2025-11-28T12:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.937411 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.937720 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.937964 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.938034 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:10 crc kubenswrapper[4779]: I1128 12:37:10.938120 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:10Z","lastTransitionTime":"2025-11-28T12:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.041583 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.041646 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.041665 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.041691 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.041709 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:11Z","lastTransitionTime":"2025-11-28T12:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.145333 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.145385 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.145402 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.145426 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.145444 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:11Z","lastTransitionTime":"2025-11-28T12:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.248937 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.249001 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.249017 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.249042 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.249059 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:11Z","lastTransitionTime":"2025-11-28T12:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.352412 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.352484 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.352506 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.352536 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.352562 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:11Z","lastTransitionTime":"2025-11-28T12:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.455947 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.456018 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.456036 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.456064 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.456081 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:11Z","lastTransitionTime":"2025-11-28T12:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.560637 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.560843 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.560869 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.560907 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.560937 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:11Z","lastTransitionTime":"2025-11-28T12:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.664498 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.664577 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.664604 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.664635 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.664656 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:11Z","lastTransitionTime":"2025-11-28T12:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.767656 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.767727 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.767745 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.767771 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.767787 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:11Z","lastTransitionTime":"2025-11-28T12:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.871235 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.871307 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.871324 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.871351 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.871370 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:11Z","lastTransitionTime":"2025-11-28T12:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.974935 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.975028 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.975058 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.975098 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:11 crc kubenswrapper[4779]: I1128 12:37:11.975172 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:11Z","lastTransitionTime":"2025-11-28T12:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.078244 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.078315 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.078331 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.078355 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.078374 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.181272 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.181335 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.181354 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.181378 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.181394 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.284325 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.284378 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.284395 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.284513 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.284576 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.388197 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.388270 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.388293 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.388326 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.388350 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.492243 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.492302 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.492320 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.492346 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.492363 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.595663 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.595726 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.595750 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.595783 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.595807 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.701892 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.701962 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.701978 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.702007 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.702027 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.725364 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.725430 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.725441 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.725373 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:12 crc kubenswrapper[4779]: E1128 12:37:12.725649 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:12 crc kubenswrapper[4779]: E1128 12:37:12.725794 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:12 crc kubenswrapper[4779]: E1128 12:37:12.726075 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:12 crc kubenswrapper[4779]: E1128 12:37:12.726184 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.805289 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.805347 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.805364 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.805392 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.805411 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.887561 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.887639 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.887656 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.887688 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.887713 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: E1128 12:37:12.910258 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.915719 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.915786 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.915812 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.915860 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.915890 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: E1128 12:37:12.936566 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.942438 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.942507 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.942687 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.942721 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.942745 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: E1128 12:37:12.970379 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.976229 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.976306 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.976330 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.976361 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.976387 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:12 crc kubenswrapper[4779]: E1128 12:37:12.993830 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:12Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.999422 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.999490 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.999509 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.999536 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:12 crc kubenswrapper[4779]: I1128 12:37:12.999554 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:12Z","lastTransitionTime":"2025-11-28T12:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:13 crc kubenswrapper[4779]: E1128 12:37:13.018615 4779 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T12:37:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"78a2023c-0feb-4049-a56a-d55919a84d1c\\\",\\\"systemUUID\\\":\\\"232cf3c8-8956-4a87-8900-bbd0298775e9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:13Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:13 crc kubenswrapper[4779]: E1128 12:37:13.018805 4779 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.021510 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.021556 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.021565 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.021583 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.021593 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:13Z","lastTransitionTime":"2025-11-28T12:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.124419 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.124480 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.124490 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.124509 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.124521 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:13Z","lastTransitionTime":"2025-11-28T12:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.228373 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.228425 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.228437 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.228459 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.228473 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:13Z","lastTransitionTime":"2025-11-28T12:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.228473 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:13Z","lastTransitionTime":"2025-11-28T12:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.331310 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.331368 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.331380 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.331399 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.331414 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:13Z","lastTransitionTime":"2025-11-28T12:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.434809 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.434879 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.434896 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.434923 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.434942 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:13Z","lastTransitionTime":"2025-11-28T12:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.538415 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.538493 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.538515 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.538545 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.538564 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:13Z","lastTransitionTime":"2025-11-28T12:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.642005 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.642087 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.642194 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.642289 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.642323 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:13Z","lastTransitionTime":"2025-11-28T12:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.727096 4779 scope.go:117] "RemoveContainer" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b"
Nov 28 12:37:13 crc kubenswrapper[4779]: E1128 12:37:13.727473 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.745560 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.745611 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.745630 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.745653 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
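The "back-off 40s" for ovnkube-controller above is the kubelet's crash-loop throttle after a third consecutive failed restart, assuming the 10s-doubling schedule capped at 5m that kubelet has historically used (an assumption; the log itself only shows the 40s value). A minimal sketch of that schedule:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns a kubelet-style restart delay after `failures`
// consecutive failed restarts: 10s initially, doubling each time, capped at
// 5m. These constants are assumptions modeled on kubelet's defaults.
func crashLoopDelay(failures int) time.Duration {
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	d := initialDelay
	for i := 0; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	// failures=2 yields the "back-off 40s" seen for ovnkube-controller.
	for n := 0; n <= 6; n++ {
		fmt.Printf("after %d failed restarts -> back-off %s\n", n, crashLoopDelay(n))
	}
}
```

Since ovnkube-node is the component expected to write the missing CNI configuration, the NotReady condition presumably cannot clear while its controller container sits in back-off.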
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.745672 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:13Z","lastTransitionTime":"2025-11-28T12:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.848984 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.849036 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.849047 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.849064 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.849077 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:13Z","lastTransitionTime":"2025-11-28T12:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.951967 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.952376 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.952579 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.952849 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:37:13 crc kubenswrapper[4779]: I1128 12:37:13.953072 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:13Z","lastTransitionTime":"2025-11-28T12:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.056799 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.056889 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.056910 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.056937 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
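Each "Node became not ready" entry carries the same condition payload. When scripting against a captured log like this one, the fragment decodes cleanly with a local struct mirroring the visible fields (a stand-in for corev1.NodeCondition, so the sketch needs no Kubernetes dependency; the raw string below is copied from the entries above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the fields visible in the condition={...} payloads
// logged by setters.go above.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:13Z","lastTransitionTime":"2025-11-28T12:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
```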
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.056957 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:14Z","lastTransitionTime":"2025-11-28T12:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.161958 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.162038 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.162062 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.162101 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.162158 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:14Z","lastTransitionTime":"2025-11-28T12:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.268593 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.268643 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.268659 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.268684 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.268700 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:14Z","lastTransitionTime":"2025-11-28T12:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.372220 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.372294 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.372312 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.372340 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.372358 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:14Z","lastTransitionTime":"2025-11-28T12:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.477644 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.477719 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.477742 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.477769 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.477789 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:14Z","lastTransitionTime":"2025-11-28T12:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.580575 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.580647 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.580669 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.580749 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.580783 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:14Z","lastTransitionTime":"2025-11-28T12:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.683014 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.683112 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.683131 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.683161 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
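Every NotReady condition in this section bottoms out in the same check: the runtime finds no CNI network configuration under /etc/kubernetes/cni/net.d/. A rough sketch of that readiness probe follows (not the actual kubelet/CRI-O code; the accepted extension list is an assumption modeled on what libcni's config loader accepts):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniReady reports whether confDir contains at least one CNI network config
// file. An empty directory is exactly the condition the log keeps reporting.
func cniReady(confDir string) error {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return fmt.Errorf("reading %s: %w", confDir, err)
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return nil // a network provider has written its config
		}
	}
	return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
}

func main() {
	if err := cniReady("/etc/kubernetes/cni/net.d"); err != nil {
		// Mirrors the kubelet's NetworkReady=false / NetworkPluginNotReady report.
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady message:", err)
	}
}
```

Once the network provider (here, OVN-Kubernetes) writes its config into that directory, the same probe succeeds and the Ready condition flips without any kubelet restart.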
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.683176 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:14Z","lastTransitionTime":"2025-11-28T12:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.725235 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.725347 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.725347 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 12:37:14 crc kubenswrapper[4779]: E1128 12:37:14.725566 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4"
Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.725656 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:37:14 crc kubenswrapper[4779]: E1128 12:37:14.725854 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 12:37:14 crc kubenswrapper[4779]: E1128 12:37:14.725946 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 12:37:14 crc kubenswrapper[4779]: E1128 12:37:14.726029 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.786469 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.786557 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.786569 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.786585 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.786595 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:14Z","lastTransitionTime":"2025-11-28T12:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.891993 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.892031 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.892051 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.892068 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:14 crc kubenswrapper[4779]: I1128 12:37:14.892081 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:14Z","lastTransitionTime":"2025-11-28T12:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.685929 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj"
Nov 28 12:37:16 crc kubenswrapper[4779]: E1128 12:37:16.686060 4779 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 12:37:16 crc kubenswrapper[4779]: E1128 12:37:16.686164 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs podName:2d9943eb-ea06-476d-8736-0a45e588d9f4 nodeName:}" failed. No retries permitted until 2025-11-28 12:38:20.686145097 +0000 UTC m=+161.251820441 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs") pod "network-metrics-daemon-c2psj" (UID: "2d9943eb-ea06-476d-8736-0a45e588d9f4") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.725487 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.725515 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.725582 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.725598 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj"
Nov 28 12:37:16 crc kubenswrapper[4779]: E1128 12:37:16.726267 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 12:37:16 crc kubenswrapper[4779]: E1128 12:37:16.726404 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 12:37:16 crc kubenswrapper[4779]: E1128 12:37:16.726529 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 12:37:16 crc kubenswrapper[4779]: E1128 12:37:16.726669 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4"
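The 1m4s in the durationBeforeRetry above is the signature of per-operation exponential backoff: a small base delay doubled on each consecutive failure of the same volume operation reaches 64s after seven doublings. A self-contained Go sketch of that schedule (the 500ms base and factor of 2 are assumptions chosen to reproduce the logged value, not parameters read out of kubelet):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical kubelet-style backoff: start small, double per
	// consecutive failure of the same operation (cap omitted here).
	delay := 500 * time.Millisecond
	for failure := 1; failure <= 8; failure++ {
		fmt.Printf("failure %d: durationBeforeRetry %v\n", failure, delay)
		delay *= 2
	}
	// failure 8 prints "1m4s", matching the nestedpendingoperations record.
}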
pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.749062 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.749114 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.749126 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.749145 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.749160 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:16Z","lastTransitionTime":"2025-11-28T12:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.852023 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.852078 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.852135 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.852161 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.852179 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:16Z","lastTransitionTime":"2025-11-28T12:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.955166 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.955234 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.955255 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.955400 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:16 crc kubenswrapper[4779]: I1128 12:37:16.955421 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:16Z","lastTransitionTime":"2025-11-28T12:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.058205 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.058278 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.058296 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.058322 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.058341 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:17Z","lastTransitionTime":"2025-11-28T12:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.160951 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.161036 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.161064 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.161131 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.161159 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:17Z","lastTransitionTime":"2025-11-28T12:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.263976 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.264029 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.264063 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.264133 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.264157 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:17Z","lastTransitionTime":"2025-11-28T12:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.366604 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.366672 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.366716 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.366744 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.366765 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:17Z","lastTransitionTime":"2025-11-28T12:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.469986 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.470044 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.470063 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.470134 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.470159 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:17Z","lastTransitionTime":"2025-11-28T12:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.573552 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.573617 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.573637 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.573662 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.573678 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:17Z","lastTransitionTime":"2025-11-28T12:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.676983 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.677048 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.677084 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.677155 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.677182 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:17Z","lastTransitionTime":"2025-11-28T12:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.780794 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.780971 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.781001 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.781030 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.781052 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:17Z","lastTransitionTime":"2025-11-28T12:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.884488 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.884538 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.884555 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.884583 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.884604 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:17Z","lastTransitionTime":"2025-11-28T12:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.988142 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.988180 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.988188 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.988206 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:17 crc kubenswrapper[4779]: I1128 12:37:17.988217 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:17Z","lastTransitionTime":"2025-11-28T12:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.090975 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.091047 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.091085 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.091166 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.091194 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:18Z","lastTransitionTime":"2025-11-28T12:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.195157 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.195201 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.195218 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.195238 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.195252 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:18Z","lastTransitionTime":"2025-11-28T12:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.298666 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.298731 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.298740 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.298757 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.298768 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:18Z","lastTransitionTime":"2025-11-28T12:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.401651 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.401719 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.401736 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.401763 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.401785 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:18Z","lastTransitionTime":"2025-11-28T12:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.504802 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.504924 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.504950 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.504982 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.505005 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:18Z","lastTransitionTime":"2025-11-28T12:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.607991 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.608058 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.608080 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.608143 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.608164 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:18Z","lastTransitionTime":"2025-11-28T12:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.710738 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.710822 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.710845 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.710880 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.710903 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:18Z","lastTransitionTime":"2025-11-28T12:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.725195 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.725239 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.725297 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj"
Nov 28 12:37:18 crc kubenswrapper[4779]: I1128 12:37:18.725251 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 12:37:18 crc kubenswrapper[4779]: E1128 12:37:18.725391 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 12:37:18 crc kubenswrapper[4779]: E1128 12:37:18.725475 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 12:37:18 crc kubenswrapper[4779]: E1128 12:37:18.725639 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 12:37:18 crc kubenswrapper[4779]: E1128 12:37:18.725787 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4"
[node-status event blocks repeat every ~100 ms from 12:37:18.814 through 12:37:19.646; only the timestamps differ]
Has your network provider started?"} Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.646564 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.646608 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.646617 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.646634 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.646645 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:19Z","lastTransitionTime":"2025-11-28T12:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.738457 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c91a3b1-3cec-4dcd-8f16-bc721aaedc52\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7be2ce5bc20d31216029627f86e27657d444334d72ba98e4ae9923c9d23cf35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9512174ef01c8751a11fc5e6193513236518b4a9d5b63b05020544b8708b70b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sched
uler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54bf19864670db9dbeda1e3b133e9246f9e4027714f684783efed888890af9ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dd288476ad4d58bebb413208bbe2f45bf3997fd7587a90b08ff3af6bdc2ad10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.749256 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.749318 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.749332 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.749355 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.749368 4779 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:19Z","lastTransitionTime":"2025-11-28T12:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.750165 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.771330 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.783125 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3544f7f72339878b2314fde813e8a92a8341fb05a34a4440c7c37b983d8d23f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19dcc5041b0cbae9167c41c808ece2651eac928f93422722ae28825b5ea4f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.795338 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.806718 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d290cf8678216cdf66a68b32edea2be30af7f7fa4ff7ccac629d9e690b23b13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.828310 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd5bc7d-159f-4f4e-8647-8a373e47d35f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea9e9a74657b078824a5614dc894178aed5ca4cb11445b900485e9a6c4378f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca04f620148f38d2ef507c6906bae30fdab27c264988ca882a4c08353a6820ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ee3982c694eb3aa2f972285c233875f798c562d6c0fd198c1e5217c3cc5a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95174b2c3e26d584e9d51b80005f99235e23b2497c172f169363bbc2982c8de5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3be34c6f829a06db17b8eec4b96709555b41c7b739f47deda17526537bf4dcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1255bd2e68205f572897f57913ab45ce6308b63b621f136a045b353e7703f0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff4882cab38e49b24d93cdf9c087f9c776d86074b24614e3eece46859be0fc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-js6cp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2gg4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.845656 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373d4c2a-0b03-4671-945a-0583fa342b3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e79e9cc7bdaacc427604d12cf94272c7ed3d93519b1d285ba336edded1b3642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b887fb78d1be13c77a88ce49c84ff0839a51056e29d59d571ab7da133dd0d897\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5a538ac7a3b48f9c58a68688a95342fb3a9d26ee3e5d7c65f1e3b8d99993294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.851472 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.851671 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.851773 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.851846 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.851915 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:19Z","lastTransitionTime":"2025-11-28T12:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.858910 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b2a3eb4-4de5-491b-b466-3a35b7d745ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23df7a96829b4103254d6da3740caab05538ddbd3235ce16e8d768e681041c56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzg5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kj9g2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.873505 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pzwdx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba664a9e-76d2-4d02-889a-e7062bfc903c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:44Z\\\",\\\"message\\\":\\\"2025-11-28T12:35:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e\\\\n2025-11-28T12:35:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4ae75205-766c-4cf0-bf74-190c15ad266e to /host/opt/cni/bin/\\\\n2025-11-28T12:35:59Z [verbose] multus-daemon started\\\\n2025-11-28T12:35:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T12:36:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfslc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pzwdx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.896833 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebbbbf6f-004c-42ae-8a38-1bcc6cb88ac2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9cede79cbe4c47d953dfa702fe815cc14ee242dede33edec3c4617824c89b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4493f154b47a353308d54341114bbbd12157f9575b873e1648d1dae6a386a534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71b9d44446078a2bb53a5a9b0a3f7a87ecf24a8554fb968a0250fc3a4cfb2d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123567b9e202a9aae6ab83bca1ea909a496c476
395703ab65e855be02f7af06e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16c959e0d582f2f01523650db7c0a1d6483dda34c3fcdfaa29d2d25e4d0b0f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://945da93d30348b22192d00c28e4158ed2bf6ce6267f6b99ce092845a6af26eb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9724b3f15fac1f997f14851c4ae0f7cda6561c9175c1498c6b04e9f49d06300a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bab8c24c1b48cc363e432899f13cc54e879ea5546d00bb58d1874a5f627fc6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.918473 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35f4f43e-a921-41b2-aa88-506055daff60\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae861b14ca36a4b482a48b94ffda32e0d188204
f356dfe60e2d8778b284dc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T12:36:59Z\\\",\\\"message\\\":\\\"128 12:36:59.339929 6826 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 12:36:59.339995 6826 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 12:36:59.340052 6826 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 12:36:59.340142 6826 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 12:36:59.340166 6826 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 12:36:59.340187 6826 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 12:36:59.340197 6826 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 12:36:59.340234 6826 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 12:36:59.340231 6826 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 12:36:59.340253 6826 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 12:36:59.340268 6826 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 12:36:59.340268 6826 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 12:36:59.340288 6826 factory.go:656] Stopping watch factory\\\\nI1128 12:36:59.340293 6826 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 12:36:59.340293 6826 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 12:36:59.340300 6826 ovnkube.go:599] Stopped ovnkube\\\\nI1128 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5msg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pbmbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.929777 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwgdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13786eba-201c-40ca-89b7-174795999a9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec60bab90c7fee1fd38c00da4f84d5133876ad8f2817e5447795fcab4feb2942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6zn6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwgdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.942582 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c2psj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d9943eb-ea06-476d-8736-0a45e588d9f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8vbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:12Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-c2psj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.954835 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.954864 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.954874 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.954891 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.954900 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:19Z","lastTransitionTime":"2025-11-28T12:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.955468 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d2732-7fd1-4fa8-9da7-74872484e3f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ca35c83bfed6e6b9e11bc2acb282ab619c3a04941a8ed540853cdd43531a00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9e9439db88e70aa53dff88d8b0a4f533ad90c8652e9a4d58e93fda87fa7f5f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.972798 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b303d954-23c9-4fc9-8e79-981009172099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\\\",\\\"image\\\":\\\"quay.io/crcont/opens
hift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T12:35:57Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 12:35:52.373678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 12:35:52.376135 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3230331060/tls.crt::/tmp/serving-cert-3230331060/tls.key\\\\\\\"\\\\nI1128 12:35:57.821147 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 12:35:57.824398 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 12:35:57.824424 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 12:35:57.824444 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 12:35:57.824450 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 12:35:57.831411 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 12:35:57.831445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 12:35:57.831460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 
12:35:57.831467 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 12:35:57.831472 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 12:35:57.831476 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 12:35:57.831480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 12:35:57.831686 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 12:35:57.839127 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T12:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T12:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:19 crc kubenswrapper[4779]: I1128 12:37:19.991073 4779 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c9857379117d130ce02fa4a153dfc01c9f41ba65663ae918bd82c9b14291e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:19Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.004614 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dlvj8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8b3aa68-52ee-40cd-a059-6e410b826ce7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b2e852aeb571e85a95f4581550ee5f911d9c67fbbc4fc699e9af667a9c4b531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db55w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:35:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dlvj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:20Z is after 2025-08-24T17:21:41Z" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.018713 4779 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd0b81f7-c868-4f90-b20d-9d1b53f5216f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T12:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e8508450f924b6b8509b5d06c78535915557c5a7362b50c41515ad15f35e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383fc6deecc04584b130b3fdc9c1fded751c521513ce60898fdf1927748cd4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T12:36:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-smlr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T12:36:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jf46d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T12:37:20Z is after 2025-08-24T17:21:41Z" Nov 28 
12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.057331 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.057387 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.057399 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.057415 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.057426 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:20Z","lastTransitionTime":"2025-11-28T12:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.160286 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.160345 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.160361 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.160385 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.160405 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:20Z","lastTransitionTime":"2025-11-28T12:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.263620 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.263678 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.263689 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.263704 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.263714 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:20Z","lastTransitionTime":"2025-11-28T12:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.366398 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.366472 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.366491 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.366523 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.366547 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:20Z","lastTransitionTime":"2025-11-28T12:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.469950 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.470021 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.470053 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.470084 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.470150 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:20Z","lastTransitionTime":"2025-11-28T12:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.573364 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.573458 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.573485 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.573522 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.573548 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:20Z","lastTransitionTime":"2025-11-28T12:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.676900 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.676988 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.677011 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.677047 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.677071 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:20Z","lastTransitionTime":"2025-11-28T12:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.725251 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.725301 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.725277 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.725277 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:20 crc kubenswrapper[4779]: E1128 12:37:20.725418 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:20 crc kubenswrapper[4779]: E1128 12:37:20.725640 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:20 crc kubenswrapper[4779]: E1128 12:37:20.725679 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:20 crc kubenswrapper[4779]: E1128 12:37:20.725863 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.779989 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.780014 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.780024 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.780039 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.780049 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:20Z","lastTransitionTime":"2025-11-28T12:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.882823 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.882888 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.882910 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.882942 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.882965 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:20Z","lastTransitionTime":"2025-11-28T12:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.985264 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.985331 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.985350 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.985376 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:20 crc kubenswrapper[4779]: I1128 12:37:20.985395 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:20Z","lastTransitionTime":"2025-11-28T12:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.088871 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.088973 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.088997 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.089031 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.089056 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:21Z","lastTransitionTime":"2025-11-28T12:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.192153 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.192238 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.192313 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.192341 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.192362 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:21Z","lastTransitionTime":"2025-11-28T12:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.297673 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.297743 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.297764 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.297795 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.297826 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:21Z","lastTransitionTime":"2025-11-28T12:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.400955 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.401010 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.401025 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.401053 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.401069 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:21Z","lastTransitionTime":"2025-11-28T12:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.504397 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.504461 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.504481 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.504507 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.504529 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:21Z","lastTransitionTime":"2025-11-28T12:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.607647 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.607710 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.607725 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.607746 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.607758 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:21Z","lastTransitionTime":"2025-11-28T12:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.711182 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.711231 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.711240 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.711258 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.711269 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:21Z","lastTransitionTime":"2025-11-28T12:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.815436 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.815501 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.815514 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.815538 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.815552 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:21Z","lastTransitionTime":"2025-11-28T12:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.918914 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.918975 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.918988 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.919009 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:21 crc kubenswrapper[4779]: I1128 12:37:21.919023 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:21Z","lastTransitionTime":"2025-11-28T12:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.022251 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.022314 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.022332 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.022360 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.022380 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:22Z","lastTransitionTime":"2025-11-28T12:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.125861 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.125919 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.125934 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.125957 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.125975 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:22Z","lastTransitionTime":"2025-11-28T12:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.229255 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.229592 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.229609 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.229635 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.229655 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:22Z","lastTransitionTime":"2025-11-28T12:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.333544 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.333613 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.333633 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.333661 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.333681 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:22Z","lastTransitionTime":"2025-11-28T12:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.436620 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.436665 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.436677 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.436700 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.436712 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:22Z","lastTransitionTime":"2025-11-28T12:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.539728 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.539827 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.539849 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.539876 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.539926 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:22Z","lastTransitionTime":"2025-11-28T12:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.642048 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.642086 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.642114 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.642130 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.642139 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:22Z","lastTransitionTime":"2025-11-28T12:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.725719 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.725819 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.725921 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.726186 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:22 crc kubenswrapper[4779]: E1128 12:37:22.726187 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:22 crc kubenswrapper[4779]: E1128 12:37:22.726433 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:22 crc kubenswrapper[4779]: E1128 12:37:22.726562 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:22 crc kubenswrapper[4779]: E1128 12:37:22.726700 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.745262 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.745326 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.745344 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.745371 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.745390 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:22Z","lastTransitionTime":"2025-11-28T12:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.849293 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.849335 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.849344 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.849364 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.849373 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:22Z","lastTransitionTime":"2025-11-28T12:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.952505 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.952566 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.952582 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.952606 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:22 crc kubenswrapper[4779]: I1128 12:37:22.952623 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:22Z","lastTransitionTime":"2025-11-28T12:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.055658 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.055697 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.055707 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.055724 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.055734 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:23Z","lastTransitionTime":"2025-11-28T12:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.056724 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.056790 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.056813 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.056847 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.056867 4779 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T12:37:23Z","lastTransitionTime":"2025-11-28T12:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.128462 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2"] Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.129248 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.131442 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.131527 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.131573 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.132316 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.171520 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.171488744 podStartE2EDuration="27.171488744s" podCreationTimestamp="2025-11-28 12:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:23.14852405 +0000 UTC m=+103.714199454" watchObservedRunningTime="2025-11-28 12:37:23.171488744 +0000 UTC m=+103.737164138" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.171929 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.171918786 podStartE2EDuration="1m25.171918786s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:23.171162006 +0000 UTC m=+103.736837380" watchObservedRunningTime="2025-11-28 12:37:23.171918786 +0000 UTC m=+103.737594180" Nov 28 
12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.200338 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-dlvj8" podStartSLOduration=86.200317863 podStartE2EDuration="1m26.200317863s" podCreationTimestamp="2025-11-28 12:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:23.199254835 +0000 UTC m=+103.764930289" watchObservedRunningTime="2025-11-28 12:37:23.200317863 +0000 UTC m=+103.765993247" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.240599 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jf46d" podStartSLOduration=85.240561912 podStartE2EDuration="1m25.240561912s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:23.217589697 +0000 UTC m=+103.783265161" watchObservedRunningTime="2025-11-28 12:37:23.240561912 +0000 UTC m=+103.806237306" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.259265 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=54.259229783 podStartE2EDuration="54.259229783s" podCreationTimestamp="2025-11-28 12:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:23.258174185 +0000 UTC m=+103.823849569" watchObservedRunningTime="2025-11-28 12:37:23.259229783 +0000 UTC m=+103.824905177" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.261796 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c8c0b9d-a7c0-43c9-92b2-01177d996901-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.261855 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c8c0b9d-a7c0-43c9-92b2-01177d996901-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.261881 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c8c0b9d-a7c0-43c9-92b2-01177d996901-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.261905 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c8c0b9d-a7c0-43c9-92b2-01177d996901-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 
28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.261955 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c8c0b9d-a7c0-43c9-92b2-01177d996901-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.362795 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c8c0b9d-a7c0-43c9-92b2-01177d996901-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.363186 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c8c0b9d-a7c0-43c9-92b2-01177d996901-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.363338 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c8c0b9d-a7c0-43c9-92b2-01177d996901-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.363451 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c8c0b9d-a7c0-43c9-92b2-01177d996901-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.363557 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c8c0b9d-a7c0-43c9-92b2-01177d996901-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.363549 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c8c0b9d-a7c0-43c9-92b2-01177d996901-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.363448 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c8c0b9d-a7c0-43c9-92b2-01177d996901-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.365178 4779 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c8c0b9d-a7c0-43c9-92b2-01177d996901-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.369659 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c8c0b9d-a7c0-43c9-92b2-01177d996901-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.396438 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2gg4m" podStartSLOduration=85.396417083 podStartE2EDuration="1m25.396417083s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:23.376951321 +0000 UTC m=+103.942626785" watchObservedRunningTime="2025-11-28 12:37:23.396417083 +0000 UTC m=+103.962092447" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.397155 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.397147602 podStartE2EDuration="1m25.397147602s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:23.396331621 +0000 UTC m=+103.962006985" watchObservedRunningTime="2025-11-28 12:37:23.397147602 +0000 UTC m=+103.962822966" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.403220 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c8c0b9d-a7c0-43c9-92b2-01177d996901-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4bgj2\" (UID: \"4c8c0b9d-a7c0-43c9-92b2-01177d996901\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.416032 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podStartSLOduration=86.416009749 podStartE2EDuration="1m26.416009749s" podCreationTimestamp="2025-11-28 12:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:23.415841294 +0000 UTC m=+103.981516708" watchObservedRunningTime="2025-11-28 12:37:23.416009749 +0000 UTC m=+103.981685123" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.439667 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-pzwdx" podStartSLOduration=85.43964545 podStartE2EDuration="1m25.43964545s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:23.439167818 +0000 UTC m=+104.004843182" watchObservedRunningTime="2025-11-28 12:37:23.43964545 +0000 UTC m=+104.005320824" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 
12:37:23.454990 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.469709 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=86.469694701 podStartE2EDuration="1m26.469694701s" podCreationTimestamp="2025-11-28 12:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:23.467473583 +0000 UTC m=+104.033148947" watchObservedRunningTime="2025-11-28 12:37:23.469694701 +0000 UTC m=+104.035370065" Nov 28 12:37:23 crc kubenswrapper[4779]: W1128 12:37:23.473909 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c8c0b9d_a7c0_43c9_92b2_01177d996901.slice/crio-d8be37037151c78619cad864ec9a8027576a0fe1afb9b28d936c21839dadedee WatchSource:0}: Error finding container d8be37037151c78619cad864ec9a8027576a0fe1afb9b28d936c21839dadedee: Status 404 returned error can't find the container with id d8be37037151c78619cad864ec9a8027576a0fe1afb9b28d936c21839dadedee Nov 28 12:37:23 crc kubenswrapper[4779]: I1128 12:37:23.530859 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dwgdn" podStartSLOduration=86.53083366 podStartE2EDuration="1m26.53083366s" podCreationTimestamp="2025-11-28 12:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:23.518030063 +0000 UTC m=+104.083705427" watchObservedRunningTime="2025-11-28 12:37:23.53083366 +0000 UTC m=+104.096509024" Nov 28 12:37:24 crc kubenswrapper[4779]: I1128 12:37:24.348831 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" event={"ID":"4c8c0b9d-a7c0-43c9-92b2-01177d996901","Type":"ContainerStarted","Data":"fbf849e28d7c7c5ffb511d989b5ac6f847f4e4f9bd71450e647442bf90a3b442"} Nov 28 12:37:24 crc kubenswrapper[4779]: I1128 12:37:24.348923 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" event={"ID":"4c8c0b9d-a7c0-43c9-92b2-01177d996901","Type":"ContainerStarted","Data":"d8be37037151c78619cad864ec9a8027576a0fe1afb9b28d936c21839dadedee"} Nov 28 12:37:24 crc kubenswrapper[4779]: I1128 12:37:24.725584 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:24 crc kubenswrapper[4779]: I1128 12:37:24.725637 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:24 crc kubenswrapper[4779]: I1128 12:37:24.725678 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:24 crc kubenswrapper[4779]: I1128 12:37:24.725639 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:24 crc kubenswrapper[4779]: E1128 12:37:24.725751 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:24 crc kubenswrapper[4779]: E1128 12:37:24.725850 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:24 crc kubenswrapper[4779]: E1128 12:37:24.725895 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:24 crc kubenswrapper[4779]: E1128 12:37:24.725963 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:25 crc kubenswrapper[4779]: I1128 12:37:25.726967 4779 scope.go:117] "RemoveContainer" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b" Nov 28 12:37:25 crc kubenswrapper[4779]: E1128 12:37:25.727300 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" Nov 28 12:37:26 crc kubenswrapper[4779]: I1128 12:37:26.725697 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:26 crc kubenswrapper[4779]: I1128 12:37:26.725787 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:26 crc kubenswrapper[4779]: I1128 12:37:26.725869 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:26 crc kubenswrapper[4779]: E1128 12:37:26.725874 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:26 crc kubenswrapper[4779]: I1128 12:37:26.726051 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:26 crc kubenswrapper[4779]: E1128 12:37:26.726036 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:26 crc kubenswrapper[4779]: E1128 12:37:26.726304 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:26 crc kubenswrapper[4779]: E1128 12:37:26.726809 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:28 crc kubenswrapper[4779]: I1128 12:37:28.725830 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:28 crc kubenswrapper[4779]: I1128 12:37:28.725893 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:28 crc kubenswrapper[4779]: E1128 12:37:28.725979 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:28 crc kubenswrapper[4779]: I1128 12:37:28.725975 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:28 crc kubenswrapper[4779]: I1128 12:37:28.726043 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:28 crc kubenswrapper[4779]: E1128 12:37:28.726119 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:28 crc kubenswrapper[4779]: E1128 12:37:28.726327 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:28 crc kubenswrapper[4779]: E1128 12:37:28.726436 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:30 crc kubenswrapper[4779]: I1128 12:37:30.726342 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:30 crc kubenswrapper[4779]: I1128 12:37:30.726342 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:30 crc kubenswrapper[4779]: E1128 12:37:30.726881 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:30 crc kubenswrapper[4779]: E1128 12:37:30.726990 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:30 crc kubenswrapper[4779]: I1128 12:37:30.726459 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:30 crc kubenswrapper[4779]: E1128 12:37:30.727141 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:30 crc kubenswrapper[4779]: I1128 12:37:30.727272 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:30 crc kubenswrapper[4779]: E1128 12:37:30.727352 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:31 crc kubenswrapper[4779]: I1128 12:37:31.376787 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pzwdx_ba664a9e-76d2-4d02-889a-e7062bfc903c/kube-multus/1.log" Nov 28 12:37:31 crc kubenswrapper[4779]: I1128 12:37:31.377973 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pzwdx_ba664a9e-76d2-4d02-889a-e7062bfc903c/kube-multus/0.log" Nov 28 12:37:31 crc kubenswrapper[4779]: I1128 12:37:31.378073 4779 generic.go:334] "Generic (PLEG): container finished" podID="ba664a9e-76d2-4d02-889a-e7062bfc903c" containerID="3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5" exitCode=1 Nov 28 12:37:31 crc kubenswrapper[4779]: I1128 12:37:31.378162 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pzwdx" event={"ID":"ba664a9e-76d2-4d02-889a-e7062bfc903c","Type":"ContainerDied","Data":"3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5"} Nov 28 12:37:31 crc kubenswrapper[4779]: I1128 12:37:31.378241 4779 scope.go:117] "RemoveContainer" containerID="5598fdba6afba30cd00c8abdae6c80300fb10dfcde40afab0f15f848addddd47" Nov 28 12:37:31 crc kubenswrapper[4779]: I1128 12:37:31.380385 4779 scope.go:117] "RemoveContainer" containerID="3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5" Nov 28 12:37:31 crc kubenswrapper[4779]: E1128 12:37:31.381073 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-pzwdx_openshift-multus(ba664a9e-76d2-4d02-889a-e7062bfc903c)\"" pod="openshift-multus/multus-pzwdx" podUID="ba664a9e-76d2-4d02-889a-e7062bfc903c" Nov 28 12:37:31 crc kubenswrapper[4779]: I1128 12:37:31.407560 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4bgj2" podStartSLOduration=93.407539209 podStartE2EDuration="1m33.407539209s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:24.366241112 +0000 UTC m=+104.931916506" watchObservedRunningTime="2025-11-28 12:37:31.407539209 +0000 UTC m=+111.973214573" Nov 28 12:37:32 crc kubenswrapper[4779]: I1128 12:37:32.383245 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pzwdx_ba664a9e-76d2-4d02-889a-e7062bfc903c/kube-multus/1.log" Nov 28 12:37:32 crc kubenswrapper[4779]: I1128 12:37:32.725714 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:32 crc kubenswrapper[4779]: I1128 12:37:32.725820 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:32 crc kubenswrapper[4779]: I1128 12:37:32.725821 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:32 crc kubenswrapper[4779]: I1128 12:37:32.725818 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:32 crc kubenswrapper[4779]: E1128 12:37:32.725930 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:32 crc kubenswrapper[4779]: E1128 12:37:32.726166 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:32 crc kubenswrapper[4779]: E1128 12:37:32.726259 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:32 crc kubenswrapper[4779]: E1128 12:37:32.726334 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:34 crc kubenswrapper[4779]: I1128 12:37:34.725264 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:34 crc kubenswrapper[4779]: I1128 12:37:34.725318 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:34 crc kubenswrapper[4779]: I1128 12:37:34.725272 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:34 crc kubenswrapper[4779]: I1128 12:37:34.725362 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:34 crc kubenswrapper[4779]: E1128 12:37:34.725499 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:34 crc kubenswrapper[4779]: E1128 12:37:34.725625 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:34 crc kubenswrapper[4779]: E1128 12:37:34.725798 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:34 crc kubenswrapper[4779]: E1128 12:37:34.725983 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:36 crc kubenswrapper[4779]: I1128 12:37:36.725142 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:36 crc kubenswrapper[4779]: I1128 12:37:36.725286 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:36 crc kubenswrapper[4779]: I1128 12:37:36.725178 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:36 crc kubenswrapper[4779]: E1128 12:37:36.725359 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:36 crc kubenswrapper[4779]: I1128 12:37:36.725491 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:36 crc kubenswrapper[4779]: E1128 12:37:36.725713 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:36 crc kubenswrapper[4779]: E1128 12:37:36.725955 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:36 crc kubenswrapper[4779]: E1128 12:37:36.726153 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:37 crc kubenswrapper[4779]: I1128 12:37:37.727239 4779 scope.go:117] "RemoveContainer" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b" Nov 28 12:37:37 crc kubenswrapper[4779]: E1128 12:37:37.728386 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pbmbn_openshift-ovn-kubernetes(35f4f43e-a921-41b2-aa88-506055daff60)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" Nov 28 12:37:38 crc kubenswrapper[4779]: I1128 12:37:38.725774 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:38 crc kubenswrapper[4779]: I1128 12:37:38.725831 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:38 crc kubenswrapper[4779]: I1128 12:37:38.725827 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:38 crc kubenswrapper[4779]: I1128 12:37:38.725832 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:38 crc kubenswrapper[4779]: E1128 12:37:38.726269 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:38 crc kubenswrapper[4779]: E1128 12:37:38.726475 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:38 crc kubenswrapper[4779]: E1128 12:37:38.726582 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:38 crc kubenswrapper[4779]: E1128 12:37:38.726682 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:39 crc kubenswrapper[4779]: E1128 12:37:39.687320 4779 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 28 12:37:39 crc kubenswrapper[4779]: E1128 12:37:39.829684 4779 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 12:37:40 crc kubenswrapper[4779]: I1128 12:37:40.725341 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:40 crc kubenswrapper[4779]: I1128 12:37:40.725404 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:40 crc kubenswrapper[4779]: I1128 12:37:40.725560 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:40 crc kubenswrapper[4779]: E1128 12:37:40.725743 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:40 crc kubenswrapper[4779]: E1128 12:37:40.726189 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:40 crc kubenswrapper[4779]: I1128 12:37:40.726289 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:40 crc kubenswrapper[4779]: E1128 12:37:40.726399 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:40 crc kubenswrapper[4779]: E1128 12:37:40.726595 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:42 crc kubenswrapper[4779]: I1128 12:37:42.726289 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:42 crc kubenswrapper[4779]: I1128 12:37:42.726315 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:42 crc kubenswrapper[4779]: I1128 12:37:42.726396 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:42 crc kubenswrapper[4779]: E1128 12:37:42.726491 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:42 crc kubenswrapper[4779]: I1128 12:37:42.726310 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:42 crc kubenswrapper[4779]: E1128 12:37:42.726765 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:42 crc kubenswrapper[4779]: E1128 12:37:42.726995 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:42 crc kubenswrapper[4779]: E1128 12:37:42.727240 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:44 crc kubenswrapper[4779]: I1128 12:37:44.725577 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:44 crc kubenswrapper[4779]: I1128 12:37:44.725705 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:44 crc kubenswrapper[4779]: I1128 12:37:44.725621 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:44 crc kubenswrapper[4779]: E1128 12:37:44.725866 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:44 crc kubenswrapper[4779]: I1128 12:37:44.725909 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:44 crc kubenswrapper[4779]: E1128 12:37:44.726401 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:44 crc kubenswrapper[4779]: E1128 12:37:44.726640 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:44 crc kubenswrapper[4779]: E1128 12:37:44.726847 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:44 crc kubenswrapper[4779]: I1128 12:37:44.727380 4779 scope.go:117] "RemoveContainer" containerID="3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5" Nov 28 12:37:44 crc kubenswrapper[4779]: E1128 12:37:44.830765 4779 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 12:37:45 crc kubenswrapper[4779]: I1128 12:37:45.430285 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pzwdx_ba664a9e-76d2-4d02-889a-e7062bfc903c/kube-multus/1.log" Nov 28 12:37:45 crc kubenswrapper[4779]: I1128 12:37:45.430675 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pzwdx" event={"ID":"ba664a9e-76d2-4d02-889a-e7062bfc903c","Type":"ContainerStarted","Data":"f1a944d63eb31fd058243070791b29847489e8e4a0cd31d1b188b45c0790f5f2"} Nov 28 12:37:46 crc kubenswrapper[4779]: I1128 12:37:46.726291 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:46 crc kubenswrapper[4779]: I1128 12:37:46.726355 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:46 crc kubenswrapper[4779]: I1128 12:37:46.726291 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:46 crc kubenswrapper[4779]: I1128 12:37:46.726486 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:46 crc kubenswrapper[4779]: E1128 12:37:46.726573 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:46 crc kubenswrapper[4779]: E1128 12:37:46.726670 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:46 crc kubenswrapper[4779]: E1128 12:37:46.726788 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:46 crc kubenswrapper[4779]: E1128 12:37:46.726988 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:48 crc kubenswrapper[4779]: I1128 12:37:48.725847 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:48 crc kubenswrapper[4779]: I1128 12:37:48.725900 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:48 crc kubenswrapper[4779]: I1128 12:37:48.725918 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:48 crc kubenswrapper[4779]: I1128 12:37:48.725889 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:48 crc kubenswrapper[4779]: E1128 12:37:48.726060 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:48 crc kubenswrapper[4779]: E1128 12:37:48.726211 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:48 crc kubenswrapper[4779]: E1128 12:37:48.726362 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:48 crc kubenswrapper[4779]: E1128 12:37:48.726488 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:49 crc kubenswrapper[4779]: E1128 12:37:49.832274 4779 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 12:37:50 crc kubenswrapper[4779]: I1128 12:37:50.726063 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:50 crc kubenswrapper[4779]: E1128 12:37:50.726308 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:50 crc kubenswrapper[4779]: I1128 12:37:50.726443 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:50 crc kubenswrapper[4779]: I1128 12:37:50.726489 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:50 crc kubenswrapper[4779]: I1128 12:37:50.726644 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:50 crc kubenswrapper[4779]: E1128 12:37:50.726670 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:50 crc kubenswrapper[4779]: E1128 12:37:50.726780 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:50 crc kubenswrapper[4779]: E1128 12:37:50.727035 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:52 crc kubenswrapper[4779]: I1128 12:37:52.725275 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:52 crc kubenswrapper[4779]: I1128 12:37:52.725318 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:52 crc kubenswrapper[4779]: E1128 12:37:52.725473 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:52 crc kubenswrapper[4779]: I1128 12:37:52.725514 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:52 crc kubenswrapper[4779]: I1128 12:37:52.725597 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:52 crc kubenswrapper[4779]: E1128 12:37:52.725702 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:52 crc kubenswrapper[4779]: E1128 12:37:52.725858 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:52 crc kubenswrapper[4779]: E1128 12:37:52.725998 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:52 crc kubenswrapper[4779]: I1128 12:37:52.727136 4779 scope.go:117] "RemoveContainer" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b" Nov 28 12:37:53 crc kubenswrapper[4779]: I1128 12:37:53.470469 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/3.log" Nov 28 12:37:53 crc kubenswrapper[4779]: I1128 12:37:53.473433 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerStarted","Data":"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241"} Nov 28 12:37:53 crc kubenswrapper[4779]: I1128 12:37:53.473814 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:37:53 crc kubenswrapper[4779]: I1128 12:37:53.506468 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podStartSLOduration=115.506444755 podStartE2EDuration="1m55.506444755s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:37:53.505939231 +0000 UTC m=+134.071614615" watchObservedRunningTime="2025-11-28 12:37:53.506444755 +0000 UTC m=+134.072120149" Nov 28 12:37:53 crc kubenswrapper[4779]: I1128 12:37:53.827200 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c2psj"] Nov 28 12:37:53 crc kubenswrapper[4779]: I1128 12:37:53.827297 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:53 crc kubenswrapper[4779]: E1128 12:37:53.827378 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:54 crc kubenswrapper[4779]: I1128 12:37:54.726431 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:54 crc kubenswrapper[4779]: I1128 12:37:54.726485 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:54 crc kubenswrapper[4779]: I1128 12:37:54.726465 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:54 crc kubenswrapper[4779]: E1128 12:37:54.726643 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:54 crc kubenswrapper[4779]: E1128 12:37:54.726906 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:54 crc kubenswrapper[4779]: E1128 12:37:54.727298 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:54 crc kubenswrapper[4779]: E1128 12:37:54.833884 4779 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 12:37:55 crc kubenswrapper[4779]: I1128 12:37:55.725714 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:55 crc kubenswrapper[4779]: E1128 12:37:55.725991 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:56 crc kubenswrapper[4779]: I1128 12:37:56.725705 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:56 crc kubenswrapper[4779]: I1128 12:37:56.725800 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:56 crc kubenswrapper[4779]: I1128 12:37:56.725823 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:56 crc kubenswrapper[4779]: E1128 12:37:56.725981 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:56 crc kubenswrapper[4779]: E1128 12:37:56.726163 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:56 crc kubenswrapper[4779]: E1128 12:37:56.726285 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:57 crc kubenswrapper[4779]: I1128 12:37:57.725912 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:57 crc kubenswrapper[4779]: E1128 12:37:57.726119 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:37:58 crc kubenswrapper[4779]: I1128 12:37:58.725240 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:37:58 crc kubenswrapper[4779]: I1128 12:37:58.725264 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:37:58 crc kubenswrapper[4779]: E1128 12:37:58.725569 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 12:37:58 crc kubenswrapper[4779]: I1128 12:37:58.725258 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:37:58 crc kubenswrapper[4779]: E1128 12:37:58.725687 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 12:37:58 crc kubenswrapper[4779]: E1128 12:37:58.725759 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 12:37:59 crc kubenswrapper[4779]: I1128 12:37:59.725335 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:37:59 crc kubenswrapper[4779]: E1128 12:37:59.728025 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c2psj" podUID="2d9943eb-ea06-476d-8736-0a45e588d9f4" Nov 28 12:38:00 crc kubenswrapper[4779]: I1128 12:38:00.726155 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 12:38:00 crc kubenswrapper[4779]: I1128 12:38:00.726199 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:38:00 crc kubenswrapper[4779]: I1128 12:38:00.726266 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 12:38:00 crc kubenswrapper[4779]: I1128 12:38:00.729534 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 12:38:00 crc kubenswrapper[4779]: I1128 12:38:00.730688 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 12:38:00 crc kubenswrapper[4779]: I1128 12:38:00.732174 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 12:38:00 crc kubenswrapper[4779]: I1128 12:38:00.735068 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 12:38:01 crc kubenswrapper[4779]: I1128 12:38:01.725496 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:38:01 crc kubenswrapper[4779]: I1128 12:38:01.728472 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 28 12:38:01 crc kubenswrapper[4779]: I1128 12:38:01.729579 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.856562 4779 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.899624 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hp9zp"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.900589 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.901243 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bz4fl"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.906267 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.908204 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8lqfg"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.910646 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.917198 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.917810 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.918770 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.918916 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.919030 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.919157 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.919456 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.919689 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.919979 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.920126 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.920184 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.920333 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.920389 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.920341 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.920488 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.920598 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.924241 4779 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.924372 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.924384 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.928170 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.929273 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.930492 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.931085 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.931348 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bd6l4"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.931700 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.933080 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n97k6"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.933471 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.938551 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.938791 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lfk66"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.938761 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.939305 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.939697 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.939756 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.939781 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.940006 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.943510 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4jt92"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.944061 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4jt92" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.947195 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.954720 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.955294 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.955303 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xnq47"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.955951 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958540 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-audit-policies\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958591 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnm5g\" (UniqueName: \"kubernetes.io/projected/0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb-kube-api-access-cnm5g\") pod \"machine-approver-56656f9798-t85kw\" (UID: \"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958626 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958658 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-audit\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958690 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb-config\") pod \"machine-approver-56656f9798-t85kw\" (UID: \"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958720 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958749 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfsww\" (UniqueName: \"kubernetes.io/projected/c3eebda0-cd9c-448c-8e0c-c25aea48fd54-kube-api-access-kfsww\") pod \"machine-api-operator-5694c8668f-bz4fl\" (UID: \"c3eebda0-cd9c-448c-8e0c-c25aea48fd54\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958778 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d86661dc-bc7e-43ff-9c4b-035a8afecace-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6jxrr\" (UID: \"d86661dc-bc7e-43ff-9c4b-035a8afecace\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958805 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd16f3bc-76f4-4731-9141-19cf2aaf926d-config\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958832 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958858 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958900 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb-auth-proxy-config\") pod \"machine-approver-56656f9798-t85kw\" (UID: \"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958929 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd16f3bc-76f4-4731-9141-19cf2aaf926d-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958959 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5705070-06f5-4ad4-b5df-4d82f90f8e27-config\") pod \"route-controller-manager-6576b87f9c-tvc5s\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.958986 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87lvc\" (UniqueName: \"kubernetes.io/projected/b5705070-06f5-4ad4-b5df-4d82f90f8e27-kube-api-access-87lvc\") pod \"route-controller-manager-6576b87f9c-tvc5s\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959017 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-etcd-serving-ca\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959051 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959082 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-audit-dir\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959142 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5629efeb-c910-46f3-aa69-be7863bfb6f1-serving-cert\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959176 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fca68f2a-06ef-4c6a-8971-026d05045c4a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959204 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca68f2a-06ef-4c6a-8971-026d05045c4a-serving-cert\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959235 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6l2n\" (UniqueName: \"kubernetes.io/projected/fca68f2a-06ef-4c6a-8971-026d05045c4a-kube-api-access-r6l2n\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959278 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb-machine-approver-tls\") pod \"machine-approver-56656f9798-t85kw\" (UID: \"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959306 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvn4g\" (UniqueName: \"kubernetes.io/projected/fd16f3bc-76f4-4731-9141-19cf2aaf926d-kube-api-access-xvn4g\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959338 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-config\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959367 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5705070-06f5-4ad4-b5df-4d82f90f8e27-client-ca\") pod \"route-controller-manager-6576b87f9c-tvc5s\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959395 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5629efeb-c910-46f3-aa69-be7863bfb6f1-etcd-client\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959423 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5629efeb-c910-46f3-aa69-be7863bfb6f1-encryption-config\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959454 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959489 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v69lg\" (UniqueName: \"kubernetes.io/projected/2516f68b-0c44-4a09-abc8-7c4cba0cbb60-kube-api-access-v69lg\") pod \"cluster-samples-operator-665b6dd947-7w7kl\" (UID: \"2516f68b-0c44-4a09-abc8-7c4cba0cbb60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959513 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959533 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959562 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959586 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2516f68b-0c44-4a09-abc8-7c4cba0cbb60-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7w7kl\" (UID: \"2516f68b-0c44-4a09-abc8-7c4cba0cbb60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959609 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fca68f2a-06ef-4c6a-8971-026d05045c4a-audit-policies\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959633 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0-serving-cert\") pod \"console-operator-58897d9998-lfk66\" (UID: \"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0\") " pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959660 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0-trusted-ca\") pod \"console-operator-58897d9998-lfk66\" (UID: 
\"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0\") " pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959693 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13c936d9-26fd-46c4-9099-05a09312e511-serving-cert\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959724 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3eebda0-cd9c-448c-8e0c-c25aea48fd54-config\") pod \"machine-api-operator-5694c8668f-bz4fl\" (UID: \"c3eebda0-cd9c-448c-8e0c-c25aea48fd54\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959752 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d86661dc-bc7e-43ff-9c4b-035a8afecace-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6jxrr\" (UID: \"d86661dc-bc7e-43ff-9c4b-035a8afecace\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959786 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959807 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3eebda0-cd9c-448c-8e0c-c25aea48fd54-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bz4fl\" (UID: \"c3eebda0-cd9c-448c-8e0c-c25aea48fd54\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959847 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5705070-06f5-4ad4-b5df-4d82f90f8e27-serving-cert\") pod \"route-controller-manager-6576b87f9c-tvc5s\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959868 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tv2t\" (UniqueName: \"kubernetes.io/projected/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-kube-api-access-7tv2t\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959888 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-client-ca\") pod 
\"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959917 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3eebda0-cd9c-448c-8e0c-c25aea48fd54-images\") pod \"machine-api-operator-5694c8668f-bz4fl\" (UID: \"c3eebda0-cd9c-448c-8e0c-c25aea48fd54\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959938 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8n8t\" (UniqueName: \"kubernetes.io/projected/8adac5a2-60c1-4c11-a7bd-62c113d8caca-kube-api-access-f8n8t\") pod \"downloads-7954f5f757-4jt92\" (UID: \"8adac5a2-60c1-4c11-a7bd-62c113d8caca\") " pod="openshift-console/downloads-7954f5f757-4jt92" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959959 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.959981 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-config\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960001 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m75sm\" (UniqueName: \"kubernetes.io/projected/13c936d9-26fd-46c4-9099-05a09312e511-kube-api-access-m75sm\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960022 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5629efeb-c910-46f3-aa69-be7863bfb6f1-node-pullsecrets\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960043 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d86661dc-bc7e-43ff-9c4b-035a8afecace-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6jxrr\" (UID: \"d86661dc-bc7e-43ff-9c4b-035a8afecace\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960064 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0-config\") pod \"console-operator-58897d9998-lfk66\" (UID: 
\"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0\") " pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960083 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj4w6\" (UniqueName: \"kubernetes.io/projected/c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0-kube-api-access-rj4w6\") pod \"console-operator-58897d9998-lfk66\" (UID: \"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0\") " pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960129 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fca68f2a-06ef-4c6a-8971-026d05045c4a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960151 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5jbx\" (UniqueName: \"kubernetes.io/projected/5629efeb-c910-46f3-aa69-be7863bfb6f1-kube-api-access-v5jbx\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960173 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd16f3bc-76f4-4731-9141-19cf2aaf926d-service-ca-bundle\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960195 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd16f3bc-76f4-4731-9141-19cf2aaf926d-serving-cert\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960217 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5629efeb-c910-46f3-aa69-be7863bfb6f1-audit-dir\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960238 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fca68f2a-06ef-4c6a-8971-026d05045c4a-encryption-config\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960261 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fca68f2a-06ef-4c6a-8971-026d05045c4a-audit-dir\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960282 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlq8k\" (UniqueName: \"kubernetes.io/projected/d86661dc-bc7e-43ff-9c4b-035a8afecace-kube-api-access-rlq8k\") pod \"cluster-image-registry-operator-dc59b4c8b-6jxrr\" (UID: \"d86661dc-bc7e-43ff-9c4b-035a8afecace\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960302 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fca68f2a-06ef-4c6a-8971-026d05045c4a-etcd-client\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960321 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960343 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960362 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-image-import-ca\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.960844 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ctt57"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.971842 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.971891 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gfm2w"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.972259 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.976565 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.978028 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.993708 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.994062 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.994200 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.994317 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.994448 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.994573 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.994733 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.994858 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.994944 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.995035 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.995139 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.995211 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.995277 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.995337 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.996107 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.996367 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.996492 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.996638 4779 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.996883 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997037 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997182 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997260 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997274 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997371 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997481 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997576 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997599 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tqz88"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997665 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997668 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997764 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997858 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.998108 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997876 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.997987 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.998015 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.998623 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.999121 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bz2gm"] Nov 28 12:38:03 crc kubenswrapper[4779]: I1128 12:38:03.999891 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bz2gm" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.004069 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.004336 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.004480 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.030986 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.031035 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.031272 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.031276 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.032352 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.032496 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.032785 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.032793 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.032985 4779 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.033035 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.033056 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn"] Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.033681 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.040460 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.040807 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.041230 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.041562 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.041763 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.042202 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.043803 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.059535 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.060329 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.060574 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.061124 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6"] Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.061299 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.061488 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.061686 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.061733 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.061871 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062681 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0-config\") pod \"console-operator-58897d9998-lfk66\" (UID: \"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0\") " pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062718 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj4w6\" (UniqueName: \"kubernetes.io/projected/c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0-kube-api-access-rj4w6\") pod \"console-operator-58897d9998-lfk66\" (UID: \"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0\") " pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062745 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fca68f2a-06ef-4c6a-8971-026d05045c4a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062767 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/228c180e-b1ee-45c8-a186-03b701adc920-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gc2sn\" (UID: \"228c180e-b1ee-45c8-a186-03b701adc920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062783 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b4f9d36-3495-47b8-b0a6-31f077b718a0-etcd-service-ca\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062806 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5jbx\" (UniqueName: \"kubernetes.io/projected/5629efeb-c910-46f3-aa69-be7863bfb6f1-kube-api-access-v5jbx\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062823 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd16f3bc-76f4-4731-9141-19cf2aaf926d-service-ca-bundle\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062840 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd16f3bc-76f4-4731-9141-19cf2aaf926d-serving-cert\") pod 
\"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062856 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228c180e-b1ee-45c8-a186-03b701adc920-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gc2sn\" (UID: \"228c180e-b1ee-45c8-a186-03b701adc920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062874 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5629efeb-c910-46f3-aa69-be7863bfb6f1-audit-dir\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062889 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fca68f2a-06ef-4c6a-8971-026d05045c4a-encryption-config\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062907 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fca68f2a-06ef-4c6a-8971-026d05045c4a-audit-dir\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062924 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062944 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlq8k\" (UniqueName: \"kubernetes.io/projected/d86661dc-bc7e-43ff-9c4b-035a8afecace-kube-api-access-rlq8k\") pod \"cluster-image-registry-operator-dc59b4c8b-6jxrr\" (UID: \"d86661dc-bc7e-43ff-9c4b-035a8afecace\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062960 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fca68f2a-06ef-4c6a-8971-026d05045c4a-etcd-client\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062975 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7b4f9d36-3495-47b8-b0a6-31f077b718a0-etcd-ca\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.062991 4779 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228c180e-b1ee-45c8-a186-03b701adc920-config\") pod \"kube-apiserver-operator-766d6c64bb-gc2sn\" (UID: \"228c180e-b1ee-45c8-a186-03b701adc920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063010 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-config\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063031 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063049 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-image-import-ca\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063070 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-audit-policies\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063088 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnm5g\" (UniqueName: \"kubernetes.io/projected/0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb-kube-api-access-cnm5g\") pod \"machine-approver-56656f9798-t85kw\" (UID: \"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063126 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063142 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-audit\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063159 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb-config\") pod 
\"machine-approver-56656f9798-t85kw\" (UID: \"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063175 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd16f3bc-76f4-4731-9141-19cf2aaf926d-config\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063192 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063210 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfsww\" (UniqueName: \"kubernetes.io/projected/c3eebda0-cd9c-448c-8e0c-c25aea48fd54-kube-api-access-kfsww\") pod \"machine-api-operator-5694c8668f-bz4fl\" (UID: \"c3eebda0-cd9c-448c-8e0c-c25aea48fd54\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063229 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d86661dc-bc7e-43ff-9c4b-035a8afecace-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6jxrr\" (UID: \"d86661dc-bc7e-43ff-9c4b-035a8afecace\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063243 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd16f3bc-76f4-4731-9141-19cf2aaf926d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063261 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063278 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063293 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb-auth-proxy-config\") pod \"machine-approver-56656f9798-t85kw\" (UID: 
\"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063314 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf5ph\" (UniqueName: \"kubernetes.io/projected/7b4f9d36-3495-47b8-b0a6-31f077b718a0-kube-api-access-wf5ph\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063333 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzhtk\" (UniqueName: \"kubernetes.io/projected/bb401509-3ef4-41bc-93db-fbee2b5454b9-kube-api-access-fzhtk\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063353 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5705070-06f5-4ad4-b5df-4d82f90f8e27-config\") pod \"route-controller-manager-6576b87f9c-tvc5s\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063376 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87lvc\" (UniqueName: \"kubernetes.io/projected/b5705070-06f5-4ad4-b5df-4d82f90f8e27-kube-api-access-87lvc\") pod \"route-controller-manager-6576b87f9c-tvc5s\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063398 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-etcd-serving-ca\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063422 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063445 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6l2n\" (UniqueName: \"kubernetes.io/projected/fca68f2a-06ef-4c6a-8971-026d05045c4a-kube-api-access-r6l2n\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063466 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-audit-dir\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063491 4779 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5629efeb-c910-46f3-aa69-be7863bfb6f1-serving-cert\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063510 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fca68f2a-06ef-4c6a-8971-026d05045c4a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063529 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca68f2a-06ef-4c6a-8971-026d05045c4a-serving-cert\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063548 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-config\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063567 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb-machine-approver-tls\") pod \"machine-approver-56656f9798-t85kw\" (UID: \"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063586 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvn4g\" (UniqueName: \"kubernetes.io/projected/fd16f3bc-76f4-4731-9141-19cf2aaf926d-kube-api-access-xvn4g\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063607 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66175ea0-414c-4c91-9aec-e1cfa7992a5b-serving-cert\") pod \"openshift-config-operator-7777fb866f-xnq47\" (UID: \"66175ea0-414c-4c91-9aec-e1cfa7992a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063629 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5705070-06f5-4ad4-b5df-4d82f90f8e27-client-ca\") pod \"route-controller-manager-6576b87f9c-tvc5s\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063652 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5629efeb-c910-46f3-aa69-be7863bfb6f1-etcd-client\") pod \"apiserver-76f77b778f-hp9zp\" (UID: 
\"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063682 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5629efeb-c910-46f3-aa69-be7863bfb6f1-encryption-config\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063706 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-trusted-ca-bundle\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063741 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063764 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-service-ca\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063791 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v69lg\" (UniqueName: \"kubernetes.io/projected/2516f68b-0c44-4a09-abc8-7c4cba0cbb60-kube-api-access-v69lg\") pod \"cluster-samples-operator-665b6dd947-7w7kl\" (UID: \"2516f68b-0c44-4a09-abc8-7c4cba0cbb60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063813 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063837 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063875 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 
12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063898 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2516f68b-0c44-4a09-abc8-7c4cba0cbb60-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7w7kl\" (UID: \"2516f68b-0c44-4a09-abc8-7c4cba0cbb60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063928 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0-serving-cert\") pod \"console-operator-58897d9998-lfk66\" (UID: \"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0\") " pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063950 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0-trusted-ca\") pod \"console-operator-58897d9998-lfk66\" (UID: \"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0\") " pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063953 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063971 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fca68f2a-06ef-4c6a-8971-026d05045c4a-audit-policies\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.063995 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b4f9d36-3495-47b8-b0a6-31f077b718a0-serving-cert\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064017 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13c936d9-26fd-46c4-9099-05a09312e511-serving-cert\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064036 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-serving-cert\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064052 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3eebda0-cd9c-448c-8e0c-c25aea48fd54-config\") pod \"machine-api-operator-5694c8668f-bz4fl\" (UID: \"c3eebda0-cd9c-448c-8e0c-c25aea48fd54\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" Nov 28 12:38:04 crc kubenswrapper[4779]: 
I1128 12:38:04.064070 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d86661dc-bc7e-43ff-9c4b-035a8afecace-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6jxrr\" (UID: \"d86661dc-bc7e-43ff-9c4b-035a8afecace\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064086 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7b4f9d36-3495-47b8-b0a6-31f077b718a0-etcd-client\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064139 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064162 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3eebda0-cd9c-448c-8e0c-c25aea48fd54-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bz4fl\" (UID: \"c3eebda0-cd9c-448c-8e0c-c25aea48fd54\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064187 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jml\" (UniqueName: \"kubernetes.io/projected/66175ea0-414c-4c91-9aec-e1cfa7992a5b-kube-api-access-d6jml\") pod \"openshift-config-operator-7777fb866f-xnq47\" (UID: \"66175ea0-414c-4c91-9aec-e1cfa7992a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064213 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/66175ea0-414c-4c91-9aec-e1cfa7992a5b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xnq47\" (UID: \"66175ea0-414c-4c91-9aec-e1cfa7992a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064242 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5705070-06f5-4ad4-b5df-4d82f90f8e27-serving-cert\") pod \"route-controller-manager-6576b87f9c-tvc5s\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064261 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-oauth-serving-cert\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 
12:38:04.064280 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tv2t\" (UniqueName: \"kubernetes.io/projected/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-kube-api-access-7tv2t\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064298 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-client-ca\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064315 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-oauth-config\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064331 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8n8t\" (UniqueName: \"kubernetes.io/projected/8adac5a2-60c1-4c11-a7bd-62c113d8caca-kube-api-access-f8n8t\") pod \"downloads-7954f5f757-4jt92\" (UID: \"8adac5a2-60c1-4c11-a7bd-62c113d8caca\") " pod="openshift-console/downloads-7954f5f757-4jt92" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064347 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4f9d36-3495-47b8-b0a6-31f077b718a0-config\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064370 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3eebda0-cd9c-448c-8e0c-c25aea48fd54-images\") pod \"machine-api-operator-5694c8668f-bz4fl\" (UID: \"c3eebda0-cd9c-448c-8e0c-c25aea48fd54\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064385 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d86661dc-bc7e-43ff-9c4b-035a8afecace-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6jxrr\" (UID: \"d86661dc-bc7e-43ff-9c4b-035a8afecace\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064403 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064418 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-config\") pod 
\"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064434 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m75sm\" (UniqueName: \"kubernetes.io/projected/13c936d9-26fd-46c4-9099-05a09312e511-kube-api-access-m75sm\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064449 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5629efeb-c910-46f3-aa69-be7863bfb6f1-node-pullsecrets\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.064550 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5629efeb-c910-46f3-aa69-be7863bfb6f1-node-pullsecrets\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.065975 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0-config\") pod \"console-operator-58897d9998-lfk66\" (UID: \"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0\") " pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.067382 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fca68f2a-06ef-4c6a-8971-026d05045c4a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.068079 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fca68f2a-06ef-4c6a-8971-026d05045c4a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.068511 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-config\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.069486 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3eebda0-cd9c-448c-8e0c-c25aea48fd54-config\") pod \"machine-api-operator-5694c8668f-bz4fl\" (UID: \"c3eebda0-cd9c-448c-8e0c-c25aea48fd54\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.070114 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 12:38:04 crc kubenswrapper[4779]: 
I1128 12:38:04.073123 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-client-ca\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.073686 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb-config\") pod \"machine-approver-56656f9798-t85kw\" (UID: \"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.073786 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5629efeb-c910-46f3-aa69-be7863bfb6f1-audit-dir\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.073874 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fca68f2a-06ef-4c6a-8971-026d05045c4a-audit-policies\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.074013 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fca68f2a-06ef-4c6a-8971-026d05045c4a-audit-dir\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.074714 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd16f3bc-76f4-4731-9141-19cf2aaf926d-config\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.076486 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-config\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.079390 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fca68f2a-06ef-4c6a-8971-026d05045c4a-encryption-config\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.079914 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3eebda0-cd9c-448c-8e0c-c25aea48fd54-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bz4fl\" (UID: \"c3eebda0-cd9c-448c-8e0c-c25aea48fd54\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" Nov 28 
12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.080213 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb-auth-proxy-config\") pod \"machine-approver-56656f9798-t85kw\" (UID: \"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.080531 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hp9zp"] Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.080708 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-etcd-serving-ca\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.081264 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-image-import-ca\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.082061 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5705070-06f5-4ad4-b5df-4d82f90f8e27-config\") pod \"route-controller-manager-6576b87f9c-tvc5s\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.082120 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-audit-dir\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.083548 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5705070-06f5-4ad4-b5df-4d82f90f8e27-client-ca\") pod \"route-controller-manager-6576b87f9c-tvc5s\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.084042 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.084368 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.085127 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.085256 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.085331 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl"] Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.085941 4779 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.086172 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.086787 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d86661dc-bc7e-43ff-9c4b-035a8afecace-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6jxrr\" (UID: \"d86661dc-bc7e-43ff-9c4b-035a8afecace\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.086834 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.086871 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.086926 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.087006 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.098113 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb-machine-approver-tls\") pod \"machine-approver-56656f9798-t85kw\" (UID: \"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.087035 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5629efeb-c910-46f3-aa69-be7863bfb6f1-serving-cert\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.088603 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.089382 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-audit-policies\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.088672 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5629efeb-c910-46f3-aa69-be7863bfb6f1-encryption-config\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.087405 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.090006 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-audit\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.091841 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.092434 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.092642 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.093724 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bz4fl"] Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.094005 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.090716 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fca68f2a-06ef-4c6a-8971-026d05045c4a-etcd-client\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.087782 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.095049 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13c936d9-26fd-46c4-9099-05a09312e511-serving-cert\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 
12:38:04.087811 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.089928 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd16f3bc-76f4-4731-9141-19cf2aaf926d-service-ca-bundle\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.094554 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd16f3bc-76f4-4731-9141-19cf2aaf926d-serving-cert\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.087838 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.088019 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.088050 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.094139 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.095199 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2516f68b-0c44-4a09-abc8-7c4cba0cbb60-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7w7kl\" (UID: \"2516f68b-0c44-4a09-abc8-7c4cba0cbb60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.094618 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca68f2a-06ef-4c6a-8971-026d05045c4a-serving-cert\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.103150 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5629efeb-c910-46f3-aa69-be7863bfb6f1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.108049 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3eebda0-cd9c-448c-8e0c-c25aea48fd54-images\") pod \"machine-api-operator-5694c8668f-bz4fl\" (UID: \"c3eebda0-cd9c-448c-8e0c-c25aea48fd54\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.110463 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.111107 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.116625 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5705070-06f5-4ad4-b5df-4d82f90f8e27-serving-cert\") pod \"route-controller-manager-6576b87f9c-tvc5s\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.116977 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5629efeb-c910-46f3-aa69-be7863bfb6f1-etcd-client\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.117151 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0-serving-cert\") pod \"console-operator-58897d9998-lfk66\" (UID: \"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0\") " pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.118222 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.118603 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.127388 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.128050 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.128617 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.128772 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.128982 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-k5rpm"] Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.129300 4779 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.165437 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.167124 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.169658 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.170166 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.170978 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.172135 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.176250 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd16f3bc-76f4-4731-9141-19cf2aaf926d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.176679 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.177195 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.178590 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.180502 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw"] Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.180534 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.182711 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-config\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.182772 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzhtk\" (UniqueName: \"kubernetes.io/projected/bb401509-3ef4-41bc-93db-fbee2b5454b9-kube-api-access-fzhtk\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.182802 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf5ph\" (UniqueName: \"kubernetes.io/projected/7b4f9d36-3495-47b8-b0a6-31f077b718a0-kube-api-access-wf5ph\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.182834 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66175ea0-414c-4c91-9aec-e1cfa7992a5b-serving-cert\") pod \"openshift-config-operator-7777fb866f-xnq47\" (UID: \"66175ea0-414c-4c91-9aec-e1cfa7992a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.182856 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-trusted-ca-bundle\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.182881 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-service-ca\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.182928 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b4f9d36-3495-47b8-b0a6-31f077b718a0-serving-cert\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.182949 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-serving-cert\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.182968 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7b4f9d36-3495-47b8-b0a6-31f077b718a0-etcd-client\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.182999 4779 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/66175ea0-414c-4c91-9aec-e1cfa7992a5b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xnq47\" (UID: \"66175ea0-414c-4c91-9aec-e1cfa7992a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.183020 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6jml\" (UniqueName: \"kubernetes.io/projected/66175ea0-414c-4c91-9aec-e1cfa7992a5b-kube-api-access-d6jml\") pod \"openshift-config-operator-7777fb866f-xnq47\" (UID: \"66175ea0-414c-4c91-9aec-e1cfa7992a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.183047 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-oauth-serving-cert\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.183074 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-oauth-config\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.183104 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4f9d36-3495-47b8-b0a6-31f077b718a0-config\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.183163 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/228c180e-b1ee-45c8-a186-03b701adc920-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gc2sn\" (UID: \"228c180e-b1ee-45c8-a186-03b701adc920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.183191 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b4f9d36-3495-47b8-b0a6-31f077b718a0-etcd-service-ca\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.183211 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228c180e-b1ee-45c8-a186-03b701adc920-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gc2sn\" (UID: \"228c180e-b1ee-45c8-a186-03b701adc920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.183237 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7b4f9d36-3495-47b8-b0a6-31f077b718a0-etcd-ca\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.183254 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228c180e-b1ee-45c8-a186-03b701adc920-config\") pod \"kube-apiserver-operator-766d6c64bb-gc2sn\" (UID: \"228c180e-b1ee-45c8-a186-03b701adc920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.184184 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d86661dc-bc7e-43ff-9c4b-035a8afecace-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6jxrr\" (UID: \"d86661dc-bc7e-43ff-9c4b-035a8afecace\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.184273 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-service-ca\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.184329 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.184884 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-config\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.185659 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/66175ea0-414c-4c91-9aec-e1cfa7992a5b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xnq47\" (UID: \"66175ea0-414c-4c91-9aec-e1cfa7992a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.185789 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7b4f9d36-3495-47b8-b0a6-31f077b718a0-etcd-service-ca\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.186062 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-trusted-ca-bundle\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.186253 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0-trusted-ca\") pod \"console-operator-58897d9998-lfk66\" (UID: \"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0\") " pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.186297 4779 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b4f9d36-3495-47b8-b0a6-31f077b718a0-config\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.186699 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-oauth-serving-cert\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.186712 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b4f9d36-3495-47b8-b0a6-31f077b718a0-serving-cert\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.186913 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7b4f9d36-3495-47b8-b0a6-31f077b718a0-etcd-ca\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.187006 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xbgtb"] Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.187237 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.187485 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv"] Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.187738 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xbgtb" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.187880 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ch6d4"] Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.188185 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.188301 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7"] Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.188481 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch6d4" Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.188598 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8kkl"] Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.188858 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.188948 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.190184 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.190336 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.190492 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66175ea0-414c-4c91-9aec-e1cfa7992a5b-serving-cert\") pod \"openshift-config-operator-7777fb866f-xnq47\" (UID: \"66175ea0-414c-4c91-9aec-e1cfa7992a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.190795 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.191039 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-serving-cert\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.191786 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-oauth-config\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.194560 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.195119 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.195244 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.195857 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7b4f9d36-3495-47b8-b0a6-31f077b718a0-etcd-client\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.196237 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.197136 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.198707 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.199470 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.200198 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n97k6"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.202243 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bd6l4"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.202563 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.203524 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lfk66"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.206299 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bz2gm"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.206358 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.207111 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ctt57"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.208358 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.210990 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.211546 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.212155 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.212820 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.213539 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.214318 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.215300 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.215962 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-t5r2m"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.216122 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.216366 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t5r2m"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.217353 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.218821 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.220684 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9kh95"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.221295 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9kh95"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.222583 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8lqfg"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.222777 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.224027 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.225230 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.226408 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.227424 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.228679 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.230679 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.231806 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ch6d4"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.236251 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4jt92"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.238421 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.242503 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xnq47"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.243671 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.245283 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tqz88"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.246761 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.248081 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.249210 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.250323 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m7gb6"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.251932 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mrrkd"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.252112 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-m7gb6"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.253139 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.253218 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mrrkd"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.254198 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.255354 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gfm2w"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.256615 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.257936 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8kkl"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.259100 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t5r2m"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.260402 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xbgtb"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.261681 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.262538 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.263728 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.265052 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m7gb6"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.266065 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9kh95"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.267065 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mrrkd"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.268030 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fvvf5"]
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.268639 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fvvf5"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.282500 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.303079 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.323791 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.343991 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.363217 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.369329 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228c180e-b1ee-45c8-a186-03b701adc920-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gc2sn\" (UID: \"228c180e-b1ee-45c8-a186-03b701adc920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.382510 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.402900 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.406392 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228c180e-b1ee-45c8-a186-03b701adc920-config\") pod \"kube-apiserver-operator-766d6c64bb-gc2sn\" (UID: \"228c180e-b1ee-45c8-a186-03b701adc920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.443587 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.464140 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.484044 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.503674 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.550056 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj4w6\" (UniqueName: \"kubernetes.io/projected/c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0-kube-api-access-rj4w6\") pod \"console-operator-58897d9998-lfk66\" (UID: \"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0\") " pod="openshift-console-operator/console-operator-58897d9998-lfk66"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.557670 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5jbx\" (UniqueName: \"kubernetes.io/projected/5629efeb-c910-46f3-aa69-be7863bfb6f1-kube-api-access-v5jbx\") pod \"apiserver-76f77b778f-hp9zp\" (UID: \"5629efeb-c910-46f3-aa69-be7863bfb6f1\") " pod="openshift-apiserver/apiserver-76f77b778f-hp9zp"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.576235 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tv2t\" (UniqueName: \"kubernetes.io/projected/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-kube-api-access-7tv2t\") pod \"oauth-openshift-558db77b4-n97k6\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " pod="openshift-authentication/oauth-openshift-558db77b4-n97k6"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.597483 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8n8t\" (UniqueName: \"kubernetes.io/projected/8adac5a2-60c1-4c11-a7bd-62c113d8caca-kube-api-access-f8n8t\") pod \"downloads-7954f5f757-4jt92\" (UID: \"8adac5a2-60c1-4c11-a7bd-62c113d8caca\") " pod="openshift-console/downloads-7954f5f757-4jt92"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.618799 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlq8k\" (UniqueName: \"kubernetes.io/projected/d86661dc-bc7e-43ff-9c4b-035a8afecace-kube-api-access-rlq8k\") pod \"cluster-image-registry-operator-dc59b4c8b-6jxrr\" (UID: \"d86661dc-bc7e-43ff-9c4b-035a8afecace\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.636770 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v69lg\" (UniqueName: \"kubernetes.io/projected/2516f68b-0c44-4a09-abc8-7c4cba0cbb60-kube-api-access-v69lg\") pod \"cluster-samples-operator-665b6dd947-7w7kl\" (UID: \"2516f68b-0c44-4a09-abc8-7c4cba0cbb60\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.655160 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m75sm\" (UniqueName: \"kubernetes.io/projected/13c936d9-26fd-46c4-9099-05a09312e511-kube-api-access-m75sm\") pod \"controller-manager-879f6c89f-8lqfg\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.677591 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87lvc\" (UniqueName: \"kubernetes.io/projected/b5705070-06f5-4ad4-b5df-4d82f90f8e27-kube-api-access-87lvc\") pod \"route-controller-manager-6576b87f9c-tvc5s\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.683064 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.722339 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6l2n\" (UniqueName: \"kubernetes.io/projected/fca68f2a-06ef-4c6a-8971-026d05045c4a-kube-api-access-r6l2n\") pod \"apiserver-7bbb656c7d-5p2wz\" (UID: \"fca68f2a-06ef-4c6a-8971-026d05045c4a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.769011 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfsww\" (UniqueName: \"kubernetes.io/projected/c3eebda0-cd9c-448c-8e0c-c25aea48fd54-kube-api-access-kfsww\") pod \"machine-api-operator-5694c8668f-bz4fl\" (UID: \"c3eebda0-cd9c-448c-8e0c-c25aea48fd54\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.777975 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.778328 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvn4g\" (UniqueName: \"kubernetes.io/projected/fd16f3bc-76f4-4731-9141-19cf2aaf926d-kube-api-access-xvn4g\") pod \"authentication-operator-69f744f599-bd6l4\" (UID: \"fd16f3bc-76f4-4731-9141-19cf2aaf926d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.789384 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.798629 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-lfk66"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.806825 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.813648 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.823009 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.824759 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4jt92"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.843481 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.845592 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hp9zp"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.868438 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.890995 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnm5g\" (UniqueName: \"kubernetes.io/projected/0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb-kube-api-access-cnm5g\") pod \"machine-approver-56656f9798-t85kw\" (UID: \"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.900152 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.902859 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.908964 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d86661dc-bc7e-43ff-9c4b-035a8afecace-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6jxrr\" (UID: \"d86661dc-bc7e-43ff-9c4b-035a8afecace\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.923197 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.941205 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.943176 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.949827 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.965114 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.983665 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 28 12:38:04 crc kubenswrapper[4779]: I1128 12:38:04.983983 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw"
Nov 28 12:38:05 crc kubenswrapper[4779]: W1128 12:38:05.001598 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aaef2ff_dfeb_4e1c_aeaa_151eec6d15fb.slice/crio-39b552d6c97b054c433c8989b91e17439a5ec150a27dac377a00dfb441d0352c WatchSource:0}: Error finding container 39b552d6c97b054c433c8989b91e17439a5ec150a27dac377a00dfb441d0352c: Status 404 returned error can't find the container with id 39b552d6c97b054c433c8989b91e17439a5ec150a27dac377a00dfb441d0352c
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.002918 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.023794 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.061803 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf5ph\" (UniqueName: \"kubernetes.io/projected/7b4f9d36-3495-47b8-b0a6-31f077b718a0-kube-api-access-wf5ph\") pod \"etcd-operator-b45778765-tqz88\" (UID: \"7b4f9d36-3495-47b8-b0a6-31f077b718a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.069739 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n97k6"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.069968 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.087439 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzhtk\" (UniqueName: \"kubernetes.io/projected/bb401509-3ef4-41bc-93db-fbee2b5454b9-kube-api-access-fzhtk\") pod \"console-f9d7485db-ctt57\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") " pod="openshift-console/console-f9d7485db-ctt57"
Nov 28 12:38:05 crc kubenswrapper[4779]: W1128 12:38:05.088798 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cbccab5_86c6_4c0f_82f6_9ae159b32cce.slice/crio-be097b15d07ec9b2278cc964578069431e8480592cd501cebb3185e946015a3c WatchSource:0}: Error finding container be097b15d07ec9b2278cc964578069431e8480592cd501cebb3185e946015a3c: Status 404 returned error can't find the container with id be097b15d07ec9b2278cc964578069431e8480592cd501cebb3185e946015a3c
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.098208 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6jml\" (UniqueName: \"kubernetes.io/projected/66175ea0-414c-4c91-9aec-e1cfa7992a5b-kube-api-access-d6jml\") pod \"openshift-config-operator-7777fb866f-xnq47\" (UID: \"66175ea0-414c-4c91-9aec-e1cfa7992a5b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.106240 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.115444 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-lfk66"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.122945 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/228c180e-b1ee-45c8-a186-03b701adc920-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gc2sn\" (UID: \"228c180e-b1ee-45c8-a186-03b701adc920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.125011 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.131566 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.147526 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.148485 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ctt57"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.152329 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.166122 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.176083 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.183469 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.200948 4779 request.go:700] Waited for 1.012927882s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.202686 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.222428 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.247830 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.267280 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.282483 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.310186 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.317398 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.324525 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.350872 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.357689 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8lqfg"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.365046 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.378970 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hp9zp"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.386721 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Nov 28 12:38:05 crc kubenswrapper[4779]: W1128 12:38:05.390047 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfca68f2a_06ef_4c6a_8971_026d05045c4a.slice/crio-08b778173df667540116ebf9ef16b69d40a8fbeda261cf22bcf27ef2e6dc9227 WatchSource:0}: Error finding container 08b778173df667540116ebf9ef16b69d40a8fbeda261cf22bcf27ef2e6dc9227: Status 404 returned error can't find the container with id 08b778173df667540116ebf9ef16b69d40a8fbeda261cf22bcf27ef2e6dc9227
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.400882 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.404958 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Nov 28 12:38:05 crc kubenswrapper[4779]: W1128 12:38:05.407781 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5629efeb_c910_46f3_aa69_be7863bfb6f1.slice/crio-94e66698224d8e17a870cbfbaf5e0c820b8baea146e49a522469d9d649fcb679 WatchSource:0}: Error finding container 94e66698224d8e17a870cbfbaf5e0c820b8baea146e49a522469d9d649fcb679: Status 404 returned error can't find the container with id 94e66698224d8e17a870cbfbaf5e0c820b8baea146e49a522469d9d649fcb679
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.415167 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4jt92"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.421079 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.422435 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.428532 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bz4fl"]
Nov 28 12:38:05 crc kubenswrapper[4779]: W1128 12:38:05.437849 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3eebda0_cd9c_448c_8e0c_c25aea48fd54.slice/crio-834166869d882708caae8f9bb90bd4097c5b23ca3f0fab79bd6c2ca42c97dc70 WatchSource:0}: Error finding container 834166869d882708caae8f9bb90bd4097c5b23ca3f0fab79bd6c2ca42c97dc70: Status 404 returned error can't find the container with id 834166869d882708caae8f9bb90bd4097c5b23ca3f0fab79bd6c2ca42c97dc70
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.442454 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.463607 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.483642 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.511010 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.524156 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.538226 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" event={"ID":"c3eebda0-cd9c-448c-8e0c-c25aea48fd54","Type":"ContainerStarted","Data":"834166869d882708caae8f9bb90bd4097c5b23ca3f0fab79bd6c2ca42c97dc70"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.543136 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.544958 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" event={"ID":"5629efeb-c910-46f3-aa69-be7863bfb6f1","Type":"ContainerStarted","Data":"94e66698224d8e17a870cbfbaf5e0c820b8baea146e49a522469d9d649fcb679"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.549111 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" event={"ID":"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb","Type":"ContainerStarted","Data":"bac508bfa627d2522f2609cd42728d35c030d92b74df386be9c38cad6888d901"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.549150 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" event={"ID":"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb","Type":"ContainerStarted","Data":"39b552d6c97b054c433c8989b91e17439a5ec150a27dac377a00dfb441d0352c"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.552064 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" event={"ID":"13c936d9-26fd-46c4-9099-05a09312e511","Type":"ContainerStarted","Data":"5e4f81708632f2bd20f6f63a95cf5c33bca575a5745ad3b3b03a5bcc55ba51b1"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.552112 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" event={"ID":"13c936d9-26fd-46c4-9099-05a09312e511","Type":"ContainerStarted","Data":"7da9220129db78d1b0a195910e73170ca1fc98d55331f870492edba7adaefc90"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.552503 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.559895 4779 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8lqfg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.559980 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" podUID="13c936d9-26fd-46c4-9099-05a09312e511" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.561339 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" event={"ID":"fca68f2a-06ef-4c6a-8971-026d05045c4a","Type":"ContainerStarted","Data":"08b778173df667540116ebf9ef16b69d40a8fbeda261cf22bcf27ef2e6dc9227"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.562591 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.565382 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" event={"ID":"7cbccab5-86c6-4c0f-82f6-9ae159b32cce","Type":"ContainerStarted","Data":"5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.565446 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" event={"ID":"7cbccab5-86c6-4c0f-82f6-9ae159b32cce","Type":"ContainerStarted","Data":"be097b15d07ec9b2278cc964578069431e8480592cd501cebb3185e946015a3c"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.565771 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.567884 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" event={"ID":"b5705070-06f5-4ad4-b5df-4d82f90f8e27","Type":"ContainerStarted","Data":"339f3a2f3db4143409d08ba95eaa6499179449f76b81de3b0047ccedc7a043ad"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.568387 4779 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-n97k6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body=
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.568421 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" podUID="7cbccab5-86c6-4c0f-82f6-9ae159b32cce" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.570288 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4jt92" event={"ID":"8adac5a2-60c1-4c11-a7bd-62c113d8caca","Type":"ContainerStarted","Data":"e5d2d70899206abb7899345840b690bf6c5bdac9777d1bd80415f9b5b1f29d07"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.572942 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-lfk66" event={"ID":"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0","Type":"ContainerStarted","Data":"7886f77f7e3f8381160819dd08f7629bb6009ba9d4f25586872942adc1a43780"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.572967 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-lfk66" event={"ID":"c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0","Type":"ContainerStarted","Data":"8fc0a0c5ea9aba855a92dc8fd2ce13bd5042563d2deb3b0d83d22c75bc04c4f7"}
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.573837 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-lfk66"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.575606 4779 patch_prober.go:28] interesting pod/console-operator-58897d9998-lfk66 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.575636 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-lfk66" podUID="c922ab64-1708-4b9b-bb3d-a0e1e4b5eaf0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.583504 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.602919 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.624967 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.643567 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bd6l4"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.647674 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.648342 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.668376 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Nov 28 12:38:05 crc kubenswrapper[4779]: W1128 12:38:05.675924 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd86661dc_bc7e_43ff_9c4b_035a8afecace.slice/crio-20da5f9c2fe169bdd6a11820f56110e115cf2b6f5e3f413b5903ebaba532682f WatchSource:0}: Error finding container 20da5f9c2fe169bdd6a11820f56110e115cf2b6f5e3f413b5903ebaba532682f: Status 404 returned error can't find the container with id 20da5f9c2fe169bdd6a11820f56110e115cf2b6f5e3f413b5903ebaba532682f
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.687304 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.705424 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.706676 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tqz88"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.713414 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.724335 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.747231 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.763621 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.765505 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xnq47"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.766029 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ctt57"]
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.782946 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 28 12:38:05 crc kubenswrapper[4779]: W1128 12:38:05.785983 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb401509_3ef4_41bc_93db_fbee2b5454b9.slice/crio-2b165f77d94c78b8df6d4386354b0dc458a222e970ceedaf6e4a0023e08c5d40 WatchSource:0}: Error finding container 2b165f77d94c78b8df6d4386354b0dc458a222e970ceedaf6e4a0023e08c5d40: Status 404 returned error can't find the container with id 2b165f77d94c78b8df6d4386354b0dc458a222e970ceedaf6e4a0023e08c5d40
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.807074 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.823870 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.849237 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.863132 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.883645 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.907277 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.908439 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:05 crc kubenswrapper[4779]: E1128 12:38:05.908530 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:40:07.908498739 +0000 UTC m=+268.474174093 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.908615 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.908669 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.908697 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.908719 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.910014 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.915864 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.916602 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.916881 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.923998 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.945695 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.980240 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 28 12:38:05 crc kubenswrapper[4779]: I1128 12:38:05.983203 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.002871 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.024274 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.043734 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.063190 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.083201 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.102460 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.125213 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.144201 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.151968 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.164434 4779 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.167836 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.182376 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.190614 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.201641 4779 request.go:700] Waited for 1.948151322s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.202738 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.222336 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.243985 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.263153 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.286264 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.313545 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-registry-certificates\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.313622 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.313737 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-registry-tls\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.313757 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-trusted-ca\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.313799 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd6pq\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-kube-api-access-hd6pq\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.313837 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.313877 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e562e074-6a8d-4c91-91ed-895b1b1ac2d1-metrics-tls\") pod \"dns-operator-744455d44c-bz2gm\" (UID: \"e562e074-6a8d-4c91-91ed-895b1b1ac2d1\") " pod="openshift-dns-operator/dns-operator-744455d44c-bz2gm"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.313909 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9hld\" (UniqueName: \"kubernetes.io/projected/610909a5-8090-4ed3-b686-1f1176a59e9e-kube-api-access-v9hld\") pod \"openshift-controller-manager-operator-756b6f6bc6-92x8q\" (UID: \"610909a5-8090-4ed3-b686-1f1176a59e9e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.313934 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610909a5-8090-4ed3-b686-1f1176a59e9e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-92x8q\" (UID: \"610909a5-8090-4ed3-b686-1f1176a59e9e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.313951 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnqvm\" (UniqueName: \"kubernetes.io/projected/e562e074-6a8d-4c91-91ed-895b1b1ac2d1-kube-api-access-nnqvm\") pod \"dns-operator-744455d44c-bz2gm\" (UID: \"e562e074-6a8d-4c91-91ed-895b1b1ac2d1\") " pod="openshift-dns-operator/dns-operator-744455d44c-bz2gm"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.313984 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1562193f-2d67-487b-9a29-9be653c11154-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2hxrs\" (UID: \"1562193f-2d67-487b-9a29-9be653c11154\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.313998 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1562193f-2d67-487b-9a29-9be653c11154-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2hxrs\" (UID: \"1562193f-2d67-487b-9a29-9be653c11154\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.314016 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2gqd\" (UniqueName: \"kubernetes.io/projected/1562193f-2d67-487b-9a29-9be653c11154-kube-api-access-q2gqd\") pod \"openshift-apiserver-operator-796bbdcf4f-2hxrs\" (UID: \"1562193f-2d67-487b-9a29-9be653c11154\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.314030 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-bound-sa-token\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.314043 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610909a5-8090-4ed3-b686-1f1176a59e9e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-92x8q\" (UID: \"610909a5-8090-4ed3-b686-1f1176a59e9e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q"
Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.314103 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:06 crc kubenswrapper[4779]: E1128 12:38:06.314442 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:06.814427729 +0000 UTC m=+147.380103173 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416068 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416467 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x9fl\" (UniqueName: \"kubernetes.io/projected/95299c5d-a4b2-4528-9cc2-d6d0155aa621-kube-api-access-6x9fl\") pod \"catalog-operator-68c6474976-5npz2\" (UID: \"95299c5d-a4b2-4528-9cc2-d6d0155aa621\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416513 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e2eedfd1-32f1-478a-b46d-939da24ba282-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f8kkl\" (UID: \"e2eedfd1-32f1-478a-b46d-939da24ba282\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416546 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/314edc02-f932-423f-a24b-5db0c6c08957-apiservice-cert\") pod \"packageserver-d55dfcdfc-lnvht\" (UID: \"314edc02-f932-423f-a24b-5db0c6c08957\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416561 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bfadbe4f-46c0-4c08-b766-85cbbc651ac4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hxbwl\" (UID: \"bfadbe4f-46c0-4c08-b766-85cbbc651ac4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416595 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/657217e1-39d9-4d22-acf9-930d4597d9fc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-x9sk6\" (UID: \"657217e1-39d9-4d22-acf9-930d4597d9fc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416610 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8ac53b-b8ec-45f8-8b02-008f7e50a85f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bxffw\" (UID: \"8e8ac53b-b8ec-45f8-8b02-008f7e50a85f\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416625 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/657217e1-39d9-4d22-acf9-930d4597d9fc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-x9sk6\" (UID: \"657217e1-39d9-4d22-acf9-930d4597d9fc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416648 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fxlc\" (UniqueName: \"kubernetes.io/projected/314edc02-f932-423f-a24b-5db0c6c08957-kube-api-access-5fxlc\") pod \"packageserver-d55dfcdfc-lnvht\" (UID: \"314edc02-f932-423f-a24b-5db0c6c08957\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416665 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a42c1ca1-45e2-48a2-94f2-a0e38e001d4b-signing-cabundle\") pod \"service-ca-9c57cc56f-9kh95\" (UID: \"a42c1ca1-45e2-48a2-94f2-a0e38e001d4b\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416697 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ptfq\" (UniqueName: \"kubernetes.io/projected/df4a38f8-8868-4674-b1a8-5d47dd9b9d31-kube-api-access-4ptfq\") pod \"migrator-59844c95c7-ch6d4\" (UID: \"df4a38f8-8868-4674-b1a8-5d47dd9b9d31\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch6d4" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416711 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f5a958f1-dcb5-4ec4-aecf-d75645454426-node-bootstrap-token\") pod \"machine-config-server-fvvf5\" (UID: \"f5a958f1-dcb5-4ec4-aecf-d75645454426\") " pod="openshift-machine-config-operator/machine-config-server-fvvf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416727 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8aa27b4a-7e0c-42e7-8732-8fa9dd15a754-auth-proxy-config\") pod \"machine-config-operator-74547568cd-z75wf\" (UID: \"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416741 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqckz\" (UniqueName: \"kubernetes.io/projected/e2eedfd1-32f1-478a-b46d-939da24ba282-kube-api-access-dqckz\") pod \"marketplace-operator-79b997595-f8kkl\" (UID: \"e2eedfd1-32f1-478a-b46d-939da24ba282\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416757 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58b8c\" (UniqueName: \"kubernetes.io/projected/47d26bcb-c4e5-439c-8709-d589e50a1dad-kube-api-access-58b8c\") pod 
\"dns-default-mrrkd\" (UID: \"47d26bcb-c4e5-439c-8709-d589e50a1dad\") " pod="openshift-dns/dns-default-mrrkd" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416774 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/95299c5d-a4b2-4528-9cc2-d6d0155aa621-profile-collector-cert\") pod \"catalog-operator-68c6474976-5npz2\" (UID: \"95299c5d-a4b2-4528-9cc2-d6d0155aa621\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416808 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv4tv\" (UniqueName: \"kubernetes.io/projected/bfadbe4f-46c0-4c08-b766-85cbbc651ac4-kube-api-access-wv4tv\") pod \"package-server-manager-789f6589d5-hxbwl\" (UID: \"bfadbe4f-46c0-4c08-b766-85cbbc651ac4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416830 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cebf084-4ed8-45d2-a6ed-77c092539420-cert\") pod \"ingress-canary-t5r2m\" (UID: \"5cebf084-4ed8-45d2-a6ed-77c092539420\") " pod="openshift-ingress-canary/ingress-canary-t5r2m" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416844 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2eedfd1-32f1-478a-b46d-939da24ba282-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f8kkl\" (UID: \"e2eedfd1-32f1-478a-b46d-939da24ba282\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416861 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79af79ce-0947-4a73-b45e-d588b52d115a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xbgtb\" (UID: \"79af79ce-0947-4a73-b45e-d588b52d115a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xbgtb" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416877 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/314edc02-f932-423f-a24b-5db0c6c08957-webhook-cert\") pod \"packageserver-d55dfcdfc-lnvht\" (UID: \"314edc02-f932-423f-a24b-5db0c6c08957\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416891 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e08e1dca-54a7-4ddc-8942-6a5645304b53-metrics-tls\") pod \"ingress-operator-5b745b69d9-gz9cl\" (UID: \"e08e1dca-54a7-4ddc-8942-6a5645304b53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416906 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k29ts\" (UniqueName: \"kubernetes.io/projected/e08e1dca-54a7-4ddc-8942-6a5645304b53-kube-api-access-k29ts\") pod \"ingress-operator-5b745b69d9-gz9cl\" (UID: \"e08e1dca-54a7-4ddc-8942-6a5645304b53\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416923 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svgs5\" (UniqueName: \"kubernetes.io/projected/331b553d-6ae6-48fa-93a1-5e07ce6747f3-kube-api-access-svgs5\") pod \"service-ca-operator-777779d784-fg2kt\" (UID: \"331b553d-6ae6-48fa-93a1-5e07ce6747f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416939 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-bound-sa-token\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416956 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b868v\" (UniqueName: \"kubernetes.io/projected/a42c1ca1-45e2-48a2-94f2-a0e38e001d4b-kube-api-access-b868v\") pod \"service-ca-9c57cc56f-9kh95\" (UID: \"a42c1ca1-45e2-48a2-94f2-a0e38e001d4b\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.416981 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjrs4\" (UniqueName: \"kubernetes.io/projected/5cebf084-4ed8-45d2-a6ed-77c092539420-kube-api-access-hjrs4\") pod \"ingress-canary-t5r2m\" (UID: \"5cebf084-4ed8-45d2-a6ed-77c092539420\") " pod="openshift-ingress-canary/ingress-canary-t5r2m" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417007 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8aa27b4a-7e0c-42e7-8732-8fa9dd15a754-images\") pod \"machine-config-operator-74547568cd-z75wf\" (UID: \"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417021 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a42c1ca1-45e2-48a2-94f2-a0e38e001d4b-signing-key\") pod \"service-ca-9c57cc56f-9kh95\" (UID: \"a42c1ca1-45e2-48a2-94f2-a0e38e001d4b\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417046 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-registry-certificates\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417062 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e8ac53b-b8ec-45f8-8b02-008f7e50a85f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bxffw\" (UID: \"8e8ac53b-b8ec-45f8-8b02-008f7e50a85f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" Nov 28 12:38:06 crc 
kubenswrapper[4779]: I1128 12:38:06.417077 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnnlh\" (UniqueName: \"kubernetes.io/projected/79af79ce-0947-4a73-b45e-d588b52d115a-kube-api-access-pnnlh\") pod \"multus-admission-controller-857f4d67dd-xbgtb\" (UID: \"79af79ce-0947-4a73-b45e-d588b52d115a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xbgtb" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417112 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/47d26bcb-c4e5-439c-8709-d589e50a1dad-metrics-tls\") pod \"dns-default-mrrkd\" (UID: \"47d26bcb-c4e5-439c-8709-d589e50a1dad\") " pod="openshift-dns/dns-default-mrrkd" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417135 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/72921f7d-c7f6-4d36-a102-b3393776f50e-proxy-tls\") pod \"machine-config-controller-84d6567774-t6kkq\" (UID: \"72921f7d-c7f6-4d36-a102-b3393776f50e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417149 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdxw5\" (UniqueName: \"kubernetes.io/projected/72921f7d-c7f6-4d36-a102-b3393776f50e-kube-api-access-mdxw5\") pod \"machine-config-controller-84d6567774-t6kkq\" (UID: \"72921f7d-c7f6-4d36-a102-b3393776f50e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417186 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417202 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-plugins-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417217 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/51d49383-db9f-4a63-865c-4387ecf691ed-stats-auth\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417232 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/657217e1-39d9-4d22-acf9-930d4597d9fc-config\") pod \"kube-controller-manager-operator-78b949d7b-x9sk6\" (UID: \"657217e1-39d9-4d22-acf9-930d4597d9fc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417246 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47d26bcb-c4e5-439c-8709-d589e50a1dad-config-volume\") pod \"dns-default-mrrkd\" (UID: \"47d26bcb-c4e5-439c-8709-d589e50a1dad\") " pod="openshift-dns/dns-default-mrrkd" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417275 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f5a958f1-dcb5-4ec4-aecf-d75645454426-certs\") pod \"machine-config-server-fvvf5\" (UID: \"f5a958f1-dcb5-4ec4-aecf-d75645454426\") " pod="openshift-machine-config-operator/machine-config-server-fvvf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417289 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vw4x\" (UniqueName: \"kubernetes.io/projected/1475f2e1-1c5b-470d-b0aa-0645ad327bb5-kube-api-access-8vw4x\") pod \"control-plane-machine-set-operator-78cbb6b69f-njrwv\" (UID: \"1475f2e1-1c5b-470d-b0aa-0645ad327bb5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417315 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxc8g\" (UniqueName: \"kubernetes.io/projected/51d49383-db9f-4a63-865c-4387ecf691ed-kube-api-access-xxc8g\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417330 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc7gx\" (UniqueName: \"kubernetes.io/projected/b262e413-4650-41e5-b26b-0dd6ad0e4761-kube-api-access-zc7gx\") pod \"kube-storage-version-migrator-operator-b67b599dd-bvzn7\" (UID: \"b262e413-4650-41e5-b26b-0dd6ad0e4761\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417353 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dm7h\" (UniqueName: \"kubernetes.io/projected/189cc15e-4851-49ad-a757-49451158a3d7-kube-api-access-5dm7h\") pod \"collect-profiles-29405550-kkxf5\" (UID: \"189cc15e-4851-49ad-a757-49451158a3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417394 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-registry-tls\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417418 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-socket-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417434 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-trusted-ca\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417454 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/189cc15e-4851-49ad-a757-49451158a3d7-secret-volume\") pod \"collect-profiles-29405550-kkxf5\" (UID: \"189cc15e-4851-49ad-a757-49451158a3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417469 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-csi-data-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417493 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd6pq\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-kube-api-access-hd6pq\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417510 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e8ac53b-b8ec-45f8-8b02-008f7e50a85f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bxffw\" (UID: \"8e8ac53b-b8ec-45f8-8b02-008f7e50a85f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417524 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/95299c5d-a4b2-4528-9cc2-d6d0155aa621-srv-cert\") pod \"catalog-operator-68c6474976-5npz2\" (UID: \"95299c5d-a4b2-4528-9cc2-d6d0155aa621\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417541 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417558 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-mountpoint-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417579 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnnwk\" (UniqueName: \"kubernetes.io/projected/f5a958f1-dcb5-4ec4-aecf-d75645454426-kube-api-access-hnnwk\") pod 
\"machine-config-server-fvvf5\" (UID: \"f5a958f1-dcb5-4ec4-aecf-d75645454426\") " pod="openshift-machine-config-operator/machine-config-server-fvvf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417604 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-registration-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417620 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331b553d-6ae6-48fa-93a1-5e07ce6747f3-config\") pod \"service-ca-operator-777779d784-fg2kt\" (UID: \"331b553d-6ae6-48fa-93a1-5e07ce6747f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417635 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e562e074-6a8d-4c91-91ed-895b1b1ac2d1-metrics-tls\") pod \"dns-operator-744455d44c-bz2gm\" (UID: \"e562e074-6a8d-4c91-91ed-895b1b1ac2d1\") " pod="openshift-dns-operator/dns-operator-744455d44c-bz2gm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417652 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1475f2e1-1c5b-470d-b0aa-0645ad327bb5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-njrwv\" (UID: \"1475f2e1-1c5b-470d-b0aa-0645ad327bb5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417668 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0f65c972-5334-4705-bd19-90d43f3174e0-srv-cert\") pod \"olm-operator-6b444d44fb-jkwt2\" (UID: \"0f65c972-5334-4705-bd19-90d43f3174e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417683 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/189cc15e-4851-49ad-a757-49451158a3d7-config-volume\") pod \"collect-profiles-29405550-kkxf5\" (UID: \"189cc15e-4851-49ad-a757-49451158a3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417709 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610909a5-8090-4ed3-b686-1f1176a59e9e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-92x8q\" (UID: \"610909a5-8090-4ed3-b686-1f1176a59e9e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417726 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9hld\" (UniqueName: \"kubernetes.io/projected/610909a5-8090-4ed3-b686-1f1176a59e9e-kube-api-access-v9hld\") pod \"openshift-controller-manager-operator-756b6f6bc6-92x8q\" (UID: 
\"610909a5-8090-4ed3-b686-1f1176a59e9e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417743 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnqvm\" (UniqueName: \"kubernetes.io/projected/e562e074-6a8d-4c91-91ed-895b1b1ac2d1-kube-api-access-nnqvm\") pod \"dns-operator-744455d44c-bz2gm\" (UID: \"e562e074-6a8d-4c91-91ed-895b1b1ac2d1\") " pod="openshift-dns-operator/dns-operator-744455d44c-bz2gm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417758 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0f65c972-5334-4705-bd19-90d43f3174e0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jkwt2\" (UID: \"0f65c972-5334-4705-bd19-90d43f3174e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417774 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1562193f-2d67-487b-9a29-9be653c11154-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2hxrs\" (UID: \"1562193f-2d67-487b-9a29-9be653c11154\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417789 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e08e1dca-54a7-4ddc-8942-6a5645304b53-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gz9cl\" (UID: \"e08e1dca-54a7-4ddc-8942-6a5645304b53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417814 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/314edc02-f932-423f-a24b-5db0c6c08957-tmpfs\") pod \"packageserver-d55dfcdfc-lnvht\" (UID: \"314edc02-f932-423f-a24b-5db0c6c08957\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417829 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7g7t\" (UniqueName: \"kubernetes.io/projected/56157237-28db-49f7-8506-bbddb98aa46b-kube-api-access-l7g7t\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417844 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/72921f7d-c7f6-4d36-a102-b3393776f50e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-t6kkq\" (UID: \"72921f7d-c7f6-4d36-a102-b3393776f50e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417858 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51d49383-db9f-4a63-865c-4387ecf691ed-metrics-certs\") pod \"router-default-5444994796-k5rpm\" (UID: 
\"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417873 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2gqd\" (UniqueName: \"kubernetes.io/projected/1562193f-2d67-487b-9a29-9be653c11154-kube-api-access-q2gqd\") pod \"openshift-apiserver-operator-796bbdcf4f-2hxrs\" (UID: \"1562193f-2d67-487b-9a29-9be653c11154\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417889 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1562193f-2d67-487b-9a29-9be653c11154-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2hxrs\" (UID: \"1562193f-2d67-487b-9a29-9be653c11154\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417914 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610909a5-8090-4ed3-b686-1f1176a59e9e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-92x8q\" (UID: \"610909a5-8090-4ed3-b686-1f1176a59e9e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417947 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8aa27b4a-7e0c-42e7-8732-8fa9dd15a754-proxy-tls\") pod \"machine-config-operator-74547568cd-z75wf\" (UID: \"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417961 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e08e1dca-54a7-4ddc-8942-6a5645304b53-trusted-ca\") pod \"ingress-operator-5b745b69d9-gz9cl\" (UID: \"e08e1dca-54a7-4ddc-8942-6a5645304b53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417976 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv8rx\" (UniqueName: \"kubernetes.io/projected/0f65c972-5334-4705-bd19-90d43f3174e0-kube-api-access-mv8rx\") pod \"olm-operator-6b444d44fb-jkwt2\" (UID: \"0f65c972-5334-4705-bd19-90d43f3174e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.417990 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6n2k\" (UniqueName: \"kubernetes.io/projected/8aa27b4a-7e0c-42e7-8732-8fa9dd15a754-kube-api-access-v6n2k\") pod \"machine-config-operator-74547568cd-z75wf\" (UID: \"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.418006 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b262e413-4650-41e5-b26b-0dd6ad0e4761-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-bvzn7\" (UID: \"b262e413-4650-41e5-b26b-0dd6ad0e4761\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.418021 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/51d49383-db9f-4a63-865c-4387ecf691ed-default-certificate\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.418051 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331b553d-6ae6-48fa-93a1-5e07ce6747f3-serving-cert\") pod \"service-ca-operator-777779d784-fg2kt\" (UID: \"331b553d-6ae6-48fa-93a1-5e07ce6747f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.418074 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b262e413-4650-41e5-b26b-0dd6ad0e4761-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bvzn7\" (UID: \"b262e413-4650-41e5-b26b-0dd6ad0e4761\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.418112 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51d49383-db9f-4a63-865c-4387ecf691ed-service-ca-bundle\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: E1128 12:38:06.418750 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:06.918736577 +0000 UTC m=+147.484411931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.423629 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.427707 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1562193f-2d67-487b-9a29-9be653c11154-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2hxrs\" (UID: \"1562193f-2d67-487b-9a29-9be653c11154\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.428504 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-trusted-ca\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.430327 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-registry-certificates\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.431819 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610909a5-8090-4ed3-b686-1f1176a59e9e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-92x8q\" (UID: \"610909a5-8090-4ed3-b686-1f1176a59e9e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.432024 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1562193f-2d67-487b-9a29-9be653c11154-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2hxrs\" (UID: \"1562193f-2d67-487b-9a29-9be653c11154\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.432265 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-registry-tls\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.433709 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/e562e074-6a8d-4c91-91ed-895b1b1ac2d1-metrics-tls\") pod \"dns-operator-744455d44c-bz2gm\" (UID: \"e562e074-6a8d-4c91-91ed-895b1b1ac2d1\") " pod="openshift-dns-operator/dns-operator-744455d44c-bz2gm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.437370 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.437941 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610909a5-8090-4ed3-b686-1f1176a59e9e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-92x8q\" (UID: \"610909a5-8090-4ed3-b686-1f1176a59e9e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.462873 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-bound-sa-token\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.518891 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/72921f7d-c7f6-4d36-a102-b3393776f50e-proxy-tls\") pod \"machine-config-controller-84d6567774-t6kkq\" (UID: \"72921f7d-c7f6-4d36-a102-b3393776f50e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.518929 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdxw5\" (UniqueName: \"kubernetes.io/projected/72921f7d-c7f6-4d36-a102-b3393776f50e-kube-api-access-mdxw5\") pod \"machine-config-controller-84d6567774-t6kkq\" (UID: \"72921f7d-c7f6-4d36-a102-b3393776f50e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.518952 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-plugins-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.518968 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/51d49383-db9f-4a63-865c-4387ecf691ed-stats-auth\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.518990 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/657217e1-39d9-4d22-acf9-930d4597d9fc-config\") pod \"kube-controller-manager-operator-78b949d7b-x9sk6\" (UID: \"657217e1-39d9-4d22-acf9-930d4597d9fc\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519007 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47d26bcb-c4e5-439c-8709-d589e50a1dad-config-volume\") pod \"dns-default-mrrkd\" (UID: \"47d26bcb-c4e5-439c-8709-d589e50a1dad\") " pod="openshift-dns/dns-default-mrrkd" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519030 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f5a958f1-dcb5-4ec4-aecf-d75645454426-certs\") pod \"machine-config-server-fvvf5\" (UID: \"f5a958f1-dcb5-4ec4-aecf-d75645454426\") " pod="openshift-machine-config-operator/machine-config-server-fvvf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519047 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vw4x\" (UniqueName: \"kubernetes.io/projected/1475f2e1-1c5b-470d-b0aa-0645ad327bb5-kube-api-access-8vw4x\") pod \"control-plane-machine-set-operator-78cbb6b69f-njrwv\" (UID: \"1475f2e1-1c5b-470d-b0aa-0645ad327bb5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519066 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxc8g\" (UniqueName: \"kubernetes.io/projected/51d49383-db9f-4a63-865c-4387ecf691ed-kube-api-access-xxc8g\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519139 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc7gx\" (UniqueName: \"kubernetes.io/projected/b262e413-4650-41e5-b26b-0dd6ad0e4761-kube-api-access-zc7gx\") pod \"kube-storage-version-migrator-operator-b67b599dd-bvzn7\" (UID: \"b262e413-4650-41e5-b26b-0dd6ad0e4761\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519159 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dm7h\" (UniqueName: \"kubernetes.io/projected/189cc15e-4851-49ad-a757-49451158a3d7-kube-api-access-5dm7h\") pod \"collect-profiles-29405550-kkxf5\" (UID: \"189cc15e-4851-49ad-a757-49451158a3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519181 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-socket-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519210 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/189cc15e-4851-49ad-a757-49451158a3d7-secret-volume\") pod \"collect-profiles-29405550-kkxf5\" (UID: \"189cc15e-4851-49ad-a757-49451158a3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519238 4779 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-csi-data-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519266 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e8ac53b-b8ec-45f8-8b02-008f7e50a85f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bxffw\" (UID: \"8e8ac53b-b8ec-45f8-8b02-008f7e50a85f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519283 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/95299c5d-a4b2-4528-9cc2-d6d0155aa621-srv-cert\") pod \"catalog-operator-68c6474976-5npz2\" (UID: \"95299c5d-a4b2-4528-9cc2-d6d0155aa621\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519299 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-mountpoint-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519326 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnnwk\" (UniqueName: \"kubernetes.io/projected/f5a958f1-dcb5-4ec4-aecf-d75645454426-kube-api-access-hnnwk\") pod \"machine-config-server-fvvf5\" (UID: \"f5a958f1-dcb5-4ec4-aecf-d75645454426\") " pod="openshift-machine-config-operator/machine-config-server-fvvf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519346 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-registration-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519366 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331b553d-6ae6-48fa-93a1-5e07ce6747f3-config\") pod \"service-ca-operator-777779d784-fg2kt\" (UID: \"331b553d-6ae6-48fa-93a1-5e07ce6747f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519386 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1475f2e1-1c5b-470d-b0aa-0645ad327bb5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-njrwv\" (UID: \"1475f2e1-1c5b-470d-b0aa-0645ad327bb5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519402 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0f65c972-5334-4705-bd19-90d43f3174e0-srv-cert\") pod 
\"olm-operator-6b444d44fb-jkwt2\" (UID: \"0f65c972-5334-4705-bd19-90d43f3174e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519421 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/189cc15e-4851-49ad-a757-49451158a3d7-config-volume\") pod \"collect-profiles-29405550-kkxf5\" (UID: \"189cc15e-4851-49ad-a757-49451158a3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519458 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0f65c972-5334-4705-bd19-90d43f3174e0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jkwt2\" (UID: \"0f65c972-5334-4705-bd19-90d43f3174e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519478 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e08e1dca-54a7-4ddc-8942-6a5645304b53-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gz9cl\" (UID: \"e08e1dca-54a7-4ddc-8942-6a5645304b53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519495 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/314edc02-f932-423f-a24b-5db0c6c08957-tmpfs\") pod \"packageserver-d55dfcdfc-lnvht\" (UID: \"314edc02-f932-423f-a24b-5db0c6c08957\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519516 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7g7t\" (UniqueName: \"kubernetes.io/projected/56157237-28db-49f7-8506-bbddb98aa46b-kube-api-access-l7g7t\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519547 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/72921f7d-c7f6-4d36-a102-b3393776f50e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-t6kkq\" (UID: \"72921f7d-c7f6-4d36-a102-b3393776f50e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519566 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51d49383-db9f-4a63-865c-4387ecf691ed-metrics-certs\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519598 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc 
kubenswrapper[4779]: I1128 12:38:06.519622 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8aa27b4a-7e0c-42e7-8732-8fa9dd15a754-proxy-tls\") pod \"machine-config-operator-74547568cd-z75wf\" (UID: \"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519640 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e08e1dca-54a7-4ddc-8942-6a5645304b53-trusted-ca\") pod \"ingress-operator-5b745b69d9-gz9cl\" (UID: \"e08e1dca-54a7-4ddc-8942-6a5645304b53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519656 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv8rx\" (UniqueName: \"kubernetes.io/projected/0f65c972-5334-4705-bd19-90d43f3174e0-kube-api-access-mv8rx\") pod \"olm-operator-6b444d44fb-jkwt2\" (UID: \"0f65c972-5334-4705-bd19-90d43f3174e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519674 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6n2k\" (UniqueName: \"kubernetes.io/projected/8aa27b4a-7e0c-42e7-8732-8fa9dd15a754-kube-api-access-v6n2k\") pod \"machine-config-operator-74547568cd-z75wf\" (UID: \"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519692 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b262e413-4650-41e5-b26b-0dd6ad0e4761-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bvzn7\" (UID: \"b262e413-4650-41e5-b26b-0dd6ad0e4761\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519710 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/51d49383-db9f-4a63-865c-4387ecf691ed-default-certificate\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519725 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331b553d-6ae6-48fa-93a1-5e07ce6747f3-serving-cert\") pod \"service-ca-operator-777779d784-fg2kt\" (UID: \"331b553d-6ae6-48fa-93a1-5e07ce6747f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519744 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b262e413-4650-41e5-b26b-0dd6ad0e4761-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bvzn7\" (UID: \"b262e413-4650-41e5-b26b-0dd6ad0e4761\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519764 4779 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51d49383-db9f-4a63-865c-4387ecf691ed-service-ca-bundle\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519786 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x9fl\" (UniqueName: \"kubernetes.io/projected/95299c5d-a4b2-4528-9cc2-d6d0155aa621-kube-api-access-6x9fl\") pod \"catalog-operator-68c6474976-5npz2\" (UID: \"95299c5d-a4b2-4528-9cc2-d6d0155aa621\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519805 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e2eedfd1-32f1-478a-b46d-939da24ba282-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f8kkl\" (UID: \"e2eedfd1-32f1-478a-b46d-939da24ba282\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519823 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/314edc02-f932-423f-a24b-5db0c6c08957-apiservice-cert\") pod \"packageserver-d55dfcdfc-lnvht\" (UID: \"314edc02-f932-423f-a24b-5db0c6c08957\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519843 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bfadbe4f-46c0-4c08-b766-85cbbc651ac4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hxbwl\" (UID: \"bfadbe4f-46c0-4c08-b766-85cbbc651ac4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519863 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/657217e1-39d9-4d22-acf9-930d4597d9fc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-x9sk6\" (UID: \"657217e1-39d9-4d22-acf9-930d4597d9fc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519881 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8ac53b-b8ec-45f8-8b02-008f7e50a85f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bxffw\" (UID: \"8e8ac53b-b8ec-45f8-8b02-008f7e50a85f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519897 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/657217e1-39d9-4d22-acf9-930d4597d9fc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-x9sk6\" (UID: \"657217e1-39d9-4d22-acf9-930d4597d9fc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519917 4779 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5fxlc\" (UniqueName: \"kubernetes.io/projected/314edc02-f932-423f-a24b-5db0c6c08957-kube-api-access-5fxlc\") pod \"packageserver-d55dfcdfc-lnvht\" (UID: \"314edc02-f932-423f-a24b-5db0c6c08957\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519936 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a42c1ca1-45e2-48a2-94f2-a0e38e001d4b-signing-cabundle\") pod \"service-ca-9c57cc56f-9kh95\" (UID: \"a42c1ca1-45e2-48a2-94f2-a0e38e001d4b\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519955 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ptfq\" (UniqueName: \"kubernetes.io/projected/df4a38f8-8868-4674-b1a8-5d47dd9b9d31-kube-api-access-4ptfq\") pod \"migrator-59844c95c7-ch6d4\" (UID: \"df4a38f8-8868-4674-b1a8-5d47dd9b9d31\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch6d4" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519971 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f5a958f1-dcb5-4ec4-aecf-d75645454426-node-bootstrap-token\") pod \"machine-config-server-fvvf5\" (UID: \"f5a958f1-dcb5-4ec4-aecf-d75645454426\") " pod="openshift-machine-config-operator/machine-config-server-fvvf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.519992 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8aa27b4a-7e0c-42e7-8732-8fa9dd15a754-auth-proxy-config\") pod \"machine-config-operator-74547568cd-z75wf\" (UID: \"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520010 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqckz\" (UniqueName: \"kubernetes.io/projected/e2eedfd1-32f1-478a-b46d-939da24ba282-kube-api-access-dqckz\") pod \"marketplace-operator-79b997595-f8kkl\" (UID: \"e2eedfd1-32f1-478a-b46d-939da24ba282\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520027 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58b8c\" (UniqueName: \"kubernetes.io/projected/47d26bcb-c4e5-439c-8709-d589e50a1dad-kube-api-access-58b8c\") pod \"dns-default-mrrkd\" (UID: \"47d26bcb-c4e5-439c-8709-d589e50a1dad\") " pod="openshift-dns/dns-default-mrrkd" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520044 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/95299c5d-a4b2-4528-9cc2-d6d0155aa621-profile-collector-cert\") pod \"catalog-operator-68c6474976-5npz2\" (UID: \"95299c5d-a4b2-4528-9cc2-d6d0155aa621\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520063 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv4tv\" (UniqueName: \"kubernetes.io/projected/bfadbe4f-46c0-4c08-b766-85cbbc651ac4-kube-api-access-wv4tv\") 
pod \"package-server-manager-789f6589d5-hxbwl\" (UID: \"bfadbe4f-46c0-4c08-b766-85cbbc651ac4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520082 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cebf084-4ed8-45d2-a6ed-77c092539420-cert\") pod \"ingress-canary-t5r2m\" (UID: \"5cebf084-4ed8-45d2-a6ed-77c092539420\") " pod="openshift-ingress-canary/ingress-canary-t5r2m" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520115 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2eedfd1-32f1-478a-b46d-939da24ba282-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f8kkl\" (UID: \"e2eedfd1-32f1-478a-b46d-939da24ba282\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520135 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79af79ce-0947-4a73-b45e-d588b52d115a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xbgtb\" (UID: \"79af79ce-0947-4a73-b45e-d588b52d115a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xbgtb" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520150 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/314edc02-f932-423f-a24b-5db0c6c08957-webhook-cert\") pod \"packageserver-d55dfcdfc-lnvht\" (UID: \"314edc02-f932-423f-a24b-5db0c6c08957\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520180 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e08e1dca-54a7-4ddc-8942-6a5645304b53-metrics-tls\") pod \"ingress-operator-5b745b69d9-gz9cl\" (UID: \"e08e1dca-54a7-4ddc-8942-6a5645304b53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520199 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k29ts\" (UniqueName: \"kubernetes.io/projected/e08e1dca-54a7-4ddc-8942-6a5645304b53-kube-api-access-k29ts\") pod \"ingress-operator-5b745b69d9-gz9cl\" (UID: \"e08e1dca-54a7-4ddc-8942-6a5645304b53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520230 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svgs5\" (UniqueName: \"kubernetes.io/projected/331b553d-6ae6-48fa-93a1-5e07ce6747f3-kube-api-access-svgs5\") pod \"service-ca-operator-777779d784-fg2kt\" (UID: \"331b553d-6ae6-48fa-93a1-5e07ce6747f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520249 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b868v\" (UniqueName: \"kubernetes.io/projected/a42c1ca1-45e2-48a2-94f2-a0e38e001d4b-kube-api-access-b868v\") pod \"service-ca-9c57cc56f-9kh95\" (UID: \"a42c1ca1-45e2-48a2-94f2-a0e38e001d4b\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 
12:38:06.520266 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjrs4\" (UniqueName: \"kubernetes.io/projected/5cebf084-4ed8-45d2-a6ed-77c092539420-kube-api-access-hjrs4\") pod \"ingress-canary-t5r2m\" (UID: \"5cebf084-4ed8-45d2-a6ed-77c092539420\") " pod="openshift-ingress-canary/ingress-canary-t5r2m" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520284 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8aa27b4a-7e0c-42e7-8732-8fa9dd15a754-images\") pod \"machine-config-operator-74547568cd-z75wf\" (UID: \"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520302 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a42c1ca1-45e2-48a2-94f2-a0e38e001d4b-signing-key\") pod \"service-ca-9c57cc56f-9kh95\" (UID: \"a42c1ca1-45e2-48a2-94f2-a0e38e001d4b\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520322 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e8ac53b-b8ec-45f8-8b02-008f7e50a85f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bxffw\" (UID: \"8e8ac53b-b8ec-45f8-8b02-008f7e50a85f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520338 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnnlh\" (UniqueName: \"kubernetes.io/projected/79af79ce-0947-4a73-b45e-d588b52d115a-kube-api-access-pnnlh\") pod \"multus-admission-controller-857f4d67dd-xbgtb\" (UID: \"79af79ce-0947-4a73-b45e-d588b52d115a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xbgtb" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.520354 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/47d26bcb-c4e5-439c-8709-d589e50a1dad-metrics-tls\") pod \"dns-default-mrrkd\" (UID: \"47d26bcb-c4e5-439c-8709-d589e50a1dad\") " pod="openshift-dns/dns-default-mrrkd" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.521816 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2gqd\" (UniqueName: \"kubernetes.io/projected/1562193f-2d67-487b-9a29-9be653c11154-kube-api-access-q2gqd\") pod \"openshift-apiserver-operator-796bbdcf4f-2hxrs\" (UID: \"1562193f-2d67-487b-9a29-9be653c11154\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.524484 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-mountpoint-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.524840 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-plugins-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: 
\"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.526688 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-registration-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.527361 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331b553d-6ae6-48fa-93a1-5e07ce6747f3-config\") pod \"service-ca-operator-777779d784-fg2kt\" (UID: \"331b553d-6ae6-48fa-93a1-5e07ce6747f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.527367 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8aa27b4a-7e0c-42e7-8732-8fa9dd15a754-auth-proxy-config\") pod \"machine-config-operator-74547568cd-z75wf\" (UID: \"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.543835 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-socket-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.543963 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/56157237-28db-49f7-8506-bbddb98aa46b-csi-data-dir\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.546953 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e08e1dca-54a7-4ddc-8942-6a5645304b53-trusted-ca\") pod \"ingress-operator-5b745b69d9-gz9cl\" (UID: \"e08e1dca-54a7-4ddc-8942-6a5645304b53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.547546 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/189cc15e-4851-49ad-a757-49451158a3d7-config-volume\") pod \"collect-profiles-29405550-kkxf5\" (UID: \"189cc15e-4851-49ad-a757-49451158a3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.547604 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/51d49383-db9f-4a63-865c-4387ecf691ed-stats-auth\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: E1128 12:38:06.547983 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-28 12:38:07.047970677 +0000 UTC m=+147.613646031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.548572 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/657217e1-39d9-4d22-acf9-930d4597d9fc-config\") pod \"kube-controller-manager-operator-78b949d7b-x9sk6\" (UID: \"657217e1-39d9-4d22-acf9-930d4597d9fc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.548597 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47d26bcb-c4e5-439c-8709-d589e50a1dad-config-volume\") pod \"dns-default-mrrkd\" (UID: \"47d26bcb-c4e5-439c-8709-d589e50a1dad\") " pod="openshift-dns/dns-default-mrrkd" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.550491 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8aa27b4a-7e0c-42e7-8732-8fa9dd15a754-proxy-tls\") pod \"machine-config-operator-74547568cd-z75wf\" (UID: \"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.551729 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8aa27b4a-7e0c-42e7-8732-8fa9dd15a754-images\") pod \"machine-config-operator-74547568cd-z75wf\" (UID: \"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.550200 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/314edc02-f932-423f-a24b-5db0c6c08957-tmpfs\") pod \"packageserver-d55dfcdfc-lnvht\" (UID: \"314edc02-f932-423f-a24b-5db0c6c08957\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.552683 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51d49383-db9f-4a63-865c-4387ecf691ed-service-ca-bundle\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.553071 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b262e413-4650-41e5-b26b-0dd6ad0e4761-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bvzn7\" (UID: \"b262e413-4650-41e5-b26b-0dd6ad0e4761\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.557471 4779 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/72921f7d-c7f6-4d36-a102-b3393776f50e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-t6kkq\" (UID: \"72921f7d-c7f6-4d36-a102-b3393776f50e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.557868 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/51d49383-db9f-4a63-865c-4387ecf691ed-default-certificate\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.558288 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1475f2e1-1c5b-470d-b0aa-0645ad327bb5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-njrwv\" (UID: \"1475f2e1-1c5b-470d-b0aa-0645ad327bb5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.558656 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51d49383-db9f-4a63-865c-4387ecf691ed-metrics-certs\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.559521 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2eedfd1-32f1-478a-b46d-939da24ba282-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f8kkl\" (UID: \"e2eedfd1-32f1-478a-b46d-939da24ba282\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.560634 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b262e413-4650-41e5-b26b-0dd6ad0e4761-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bvzn7\" (UID: \"b262e413-4650-41e5-b26b-0dd6ad0e4761\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.560899 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331b553d-6ae6-48fa-93a1-5e07ce6747f3-serving-cert\") pod \"service-ca-operator-777779d784-fg2kt\" (UID: \"331b553d-6ae6-48fa-93a1-5e07ce6747f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.560960 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0f65c972-5334-4705-bd19-90d43f3174e0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jkwt2\" (UID: \"0f65c972-5334-4705-bd19-90d43f3174e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.565332 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/95299c5d-a4b2-4528-9cc2-d6d0155aa621-profile-collector-cert\") pod \"catalog-operator-68c6474976-5npz2\" (UID: \"95299c5d-a4b2-4528-9cc2-d6d0155aa621\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.565634 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/47d26bcb-c4e5-439c-8709-d589e50a1dad-metrics-tls\") pod \"dns-default-mrrkd\" (UID: \"47d26bcb-c4e5-439c-8709-d589e50a1dad\") " pod="openshift-dns/dns-default-mrrkd" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.565814 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/314edc02-f932-423f-a24b-5db0c6c08957-webhook-cert\") pod \"packageserver-d55dfcdfc-lnvht\" (UID: \"314edc02-f932-423f-a24b-5db0c6c08957\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.565948 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cebf084-4ed8-45d2-a6ed-77c092539420-cert\") pod \"ingress-canary-t5r2m\" (UID: \"5cebf084-4ed8-45d2-a6ed-77c092539420\") " pod="openshift-ingress-canary/ingress-canary-t5r2m" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.566011 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/72921f7d-c7f6-4d36-a102-b3393776f50e-proxy-tls\") pod \"machine-config-controller-84d6567774-t6kkq\" (UID: \"72921f7d-c7f6-4d36-a102-b3393776f50e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.566384 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/95299c5d-a4b2-4528-9cc2-d6d0155aa621-srv-cert\") pod \"catalog-operator-68c6474976-5npz2\" (UID: \"95299c5d-a4b2-4528-9cc2-d6d0155aa621\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.566879 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/189cc15e-4851-49ad-a757-49451158a3d7-secret-volume\") pod \"collect-profiles-29405550-kkxf5\" (UID: \"189cc15e-4851-49ad-a757-49451158a3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.567029 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e8ac53b-b8ec-45f8-8b02-008f7e50a85f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bxffw\" (UID: \"8e8ac53b-b8ec-45f8-8b02-008f7e50a85f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.567252 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd6pq\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-kube-api-access-hd6pq\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.567642 4779 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a42c1ca1-45e2-48a2-94f2-a0e38e001d4b-signing-cabundle\") pod \"service-ca-9c57cc56f-9kh95\" (UID: \"a42c1ca1-45e2-48a2-94f2-a0e38e001d4b\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.568174 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f5a958f1-dcb5-4ec4-aecf-d75645454426-certs\") pod \"machine-config-server-fvvf5\" (UID: \"f5a958f1-dcb5-4ec4-aecf-d75645454426\") " pod="openshift-machine-config-operator/machine-config-server-fvvf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.569270 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/314edc02-f932-423f-a24b-5db0c6c08957-apiservice-cert\") pod \"packageserver-d55dfcdfc-lnvht\" (UID: \"314edc02-f932-423f-a24b-5db0c6c08957\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.569679 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0f65c972-5334-4705-bd19-90d43f3174e0-srv-cert\") pod \"olm-operator-6b444d44fb-jkwt2\" (UID: \"0f65c972-5334-4705-bd19-90d43f3174e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.571891 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8ac53b-b8ec-45f8-8b02-008f7e50a85f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bxffw\" (UID: \"8e8ac53b-b8ec-45f8-8b02-008f7e50a85f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.575293 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a42c1ca1-45e2-48a2-94f2-a0e38e001d4b-signing-key\") pod \"service-ca-9c57cc56f-9kh95\" (UID: \"a42c1ca1-45e2-48a2-94f2-a0e38e001d4b\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.576674 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e2eedfd1-32f1-478a-b46d-939da24ba282-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f8kkl\" (UID: \"e2eedfd1-32f1-478a-b46d-939da24ba282\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.577963 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e08e1dca-54a7-4ddc-8942-6a5645304b53-metrics-tls\") pod \"ingress-operator-5b745b69d9-gz9cl\" (UID: \"e08e1dca-54a7-4ddc-8942-6a5645304b53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.582674 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f5a958f1-dcb5-4ec4-aecf-d75645454426-node-bootstrap-token\") pod \"machine-config-server-fvvf5\" (UID: \"f5a958f1-dcb5-4ec4-aecf-d75645454426\") " pod="openshift-machine-config-operator/machine-config-server-fvvf5" Nov 28 12:38:06 
crc kubenswrapper[4779]: I1128 12:38:06.583995 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/657217e1-39d9-4d22-acf9-930d4597d9fc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-x9sk6\" (UID: \"657217e1-39d9-4d22-acf9-930d4597d9fc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.584121 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79af79ce-0947-4a73-b45e-d588b52d115a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xbgtb\" (UID: \"79af79ce-0947-4a73-b45e-d588b52d115a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xbgtb" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.589350 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9hld\" (UniqueName: \"kubernetes.io/projected/610909a5-8090-4ed3-b686-1f1176a59e9e-kube-api-access-v9hld\") pod \"openshift-controller-manager-operator-756b6f6bc6-92x8q\" (UID: \"610909a5-8090-4ed3-b686-1f1176a59e9e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.599754 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnqvm\" (UniqueName: \"kubernetes.io/projected/e562e074-6a8d-4c91-91ed-895b1b1ac2d1-kube-api-access-nnqvm\") pod \"dns-operator-744455d44c-bz2gm\" (UID: \"e562e074-6a8d-4c91-91ed-895b1b1ac2d1\") " pod="openshift-dns-operator/dns-operator-744455d44c-bz2gm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.602775 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bfadbe4f-46c0-4c08-b766-85cbbc651ac4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hxbwl\" (UID: \"bfadbe4f-46c0-4c08-b766-85cbbc651ac4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.612368 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6n2k\" (UniqueName: \"kubernetes.io/projected/8aa27b4a-7e0c-42e7-8732-8fa9dd15a754-kube-api-access-v6n2k\") pod \"machine-config-operator-74547568cd-z75wf\" (UID: \"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.620812 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:06 crc kubenswrapper[4779]: E1128 12:38:06.621215 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:07.121197819 +0000 UTC m=+147.686873173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.629286 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.639123 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdxw5\" (UniqueName: \"kubernetes.io/projected/72921f7d-c7f6-4d36-a102-b3393776f50e-kube-api-access-mdxw5\") pod \"machine-config-controller-84d6567774-t6kkq\" (UID: \"72921f7d-c7f6-4d36-a102-b3393776f50e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.639761 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" event={"ID":"fd16f3bc-76f4-4731-9141-19cf2aaf926d","Type":"ContainerStarted","Data":"fb9f1cfb4565cfc5e72706c385137334d93a742c08ae6a611717b201a8900f8b"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.639790 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" event={"ID":"fd16f3bc-76f4-4731-9141-19cf2aaf926d","Type":"ContainerStarted","Data":"076331fd98d8816223134f419cf7a036083d8f2bf94579a47ba424c3ac4eaebf"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.656559 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e08e1dca-54a7-4ddc-8942-6a5645304b53-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gz9cl\" (UID: \"e08e1dca-54a7-4ddc-8942-6a5645304b53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.656556 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"55c41900ba0be686afce1a46b728153d782982d8a3abb17d0b7ab803fd6ffde9"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.669034 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.672700 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bz2gm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.676872 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnnwk\" (UniqueName: \"kubernetes.io/projected/f5a958f1-dcb5-4ec4-aecf-d75645454426-kube-api-access-hnnwk\") pod \"machine-config-server-fvvf5\" (UID: \"f5a958f1-dcb5-4ec4-aecf-d75645454426\") " pod="openshift-machine-config-operator/machine-config-server-fvvf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.707767 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vw4x\" (UniqueName: \"kubernetes.io/projected/1475f2e1-1c5b-470d-b0aa-0645ad327bb5-kube-api-access-8vw4x\") pod \"control-plane-machine-set-operator-78cbb6b69f-njrwv\" (UID: \"1475f2e1-1c5b-470d-b0aa-0645ad327bb5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.715957 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.716979 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxc8g\" (UniqueName: \"kubernetes.io/projected/51d49383-db9f-4a63-865c-4387ecf691ed-kube-api-access-xxc8g\") pod \"router-default-5444994796-k5rpm\" (UID: \"51d49383-db9f-4a63-865c-4387ecf691ed\") " pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.723781 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: E1128 12:38:06.724645 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:07.224633643 +0000 UTC m=+147.790308997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.750191 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dm7h\" (UniqueName: \"kubernetes.io/projected/189cc15e-4851-49ad-a757-49451158a3d7-kube-api-access-5dm7h\") pod \"collect-profiles-29405550-kkxf5\" (UID: \"189cc15e-4851-49ad-a757-49451158a3d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.752243 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" event={"ID":"c3eebda0-cd9c-448c-8e0c-c25aea48fd54","Type":"ContainerStarted","Data":"4302709bdcd55f5aa3468a246dc2d74d4c39e8493a64128c5e13753a43795229"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.752277 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" event={"ID":"c3eebda0-cd9c-448c-8e0c-c25aea48fd54","Type":"ContainerStarted","Data":"d55608991961091bec54b6edda442a83c6823aabada199549c545fc2bfc320ff"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.764756 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.776368 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.779917 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4jt92" event={"ID":"8adac5a2-60c1-4c11-a7bd-62c113d8caca","Type":"ContainerStarted","Data":"7303881f1e52f50825a0010e512154bebafecefa5fae6fb94bde44cfea49a08d"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.780720 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4jt92" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.782513 4779 patch_prober.go:28] interesting pod/downloads-7954f5f757-4jt92 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.782613 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4jt92" podUID="8adac5a2-60c1-4c11-a7bd-62c113d8caca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.789527 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.799692 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e8ac53b-b8ec-45f8-8b02-008f7e50a85f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bxffw\" (UID: \"8e8ac53b-b8ec-45f8-8b02-008f7e50a85f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.799944 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svgs5\" (UniqueName: \"kubernetes.io/projected/331b553d-6ae6-48fa-93a1-5e07ce6747f3-kube-api-access-svgs5\") pod \"service-ca-operator-777779d784-fg2kt\" (UID: \"331b553d-6ae6-48fa-93a1-5e07ce6747f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.800662 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k29ts\" (UniqueName: \"kubernetes.io/projected/e08e1dca-54a7-4ddc-8942-6a5645304b53-kube-api-access-k29ts\") pod \"ingress-operator-5b745b69d9-gz9cl\" (UID: \"e08e1dca-54a7-4ddc-8942-6a5645304b53\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.800708 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl" event={"ID":"2516f68b-0c44-4a09-abc8-7c4cba0cbb60","Type":"ContainerStarted","Data":"52be36f9f4f02984d7b22e400907b34085c36febece9c4192f37f6e890100f81"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.800736 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl" event={"ID":"2516f68b-0c44-4a09-abc8-7c4cba0cbb60","Type":"ContainerStarted","Data":"62dcfbf2b50faf9ed078e54b81e5f810ea6ecdcc5a3be050e546ade4e161e680"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.800747 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl" event={"ID":"2516f68b-0c44-4a09-abc8-7c4cba0cbb60","Type":"ContainerStarted","Data":"c9cbf7260b6f19f4af29d348f9e6baae309b03c44341ee329e0b2c4b29111e7b"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.804875 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" event={"ID":"7b4f9d36-3495-47b8-b0a6-31f077b718a0","Type":"ContainerStarted","Data":"0750016bd6c4dab6f5274ae1a6616ac6dd0177a96f136a41c41490332184c7cf"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.804909 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" event={"ID":"7b4f9d36-3495-47b8-b0a6-31f077b718a0","Type":"ContainerStarted","Data":"5905b3c49c30a5336a6765bdbb2640f902dc8ff0e69a85e012cf0c13da2a0fd1"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.806269 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" event={"ID":"0aaef2ff-dfeb-4e1c-aeaa-151eec6d15fb","Type":"ContainerStarted","Data":"5e539b40799dd297139700ba18fa438ff79c2fb077ad2679ea258cd03e877c2d"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.808213 4779 generic.go:334] 
"Generic (PLEG): container finished" podID="66175ea0-414c-4c91-9aec-e1cfa7992a5b" containerID="839b243e22bed2e7b823b1266803f9597416ae2603043115a0290c48f2d6ab68" exitCode=0 Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.808259 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" event={"ID":"66175ea0-414c-4c91-9aec-e1cfa7992a5b","Type":"ContainerDied","Data":"839b243e22bed2e7b823b1266803f9597416ae2603043115a0290c48f2d6ab68"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.808281 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" event={"ID":"66175ea0-414c-4c91-9aec-e1cfa7992a5b","Type":"ContainerStarted","Data":"7f89cd1095178f8c16eea4a4d81f4811f753da8984e6e724749119520917e0f8"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.825503 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:06 crc kubenswrapper[4779]: E1128 12:38:06.825758 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:07.325736316 +0000 UTC m=+147.891411680 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.825994 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:06 crc kubenswrapper[4779]: E1128 12:38:06.827033 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:07.32702514 +0000 UTC m=+147.892700494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.830646 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b868v\" (UniqueName: \"kubernetes.io/projected/a42c1ca1-45e2-48a2-94f2-a0e38e001d4b-kube-api-access-b868v\") pod \"service-ca-9c57cc56f-9kh95\" (UID: \"a42c1ca1-45e2-48a2-94f2-a0e38e001d4b\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.842798 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjrs4\" (UniqueName: \"kubernetes.io/projected/5cebf084-4ed8-45d2-a6ed-77c092539420-kube-api-access-hjrs4\") pod \"ingress-canary-t5r2m\" (UID: \"5cebf084-4ed8-45d2-a6ed-77c092539420\") " pod="openshift-ingress-canary/ingress-canary-t5r2m" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.846927 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" event={"ID":"d86661dc-bc7e-43ff-9c4b-035a8afecace","Type":"ContainerStarted","Data":"eb3f1b254c196576cb29de8e1e6dacbf743002438aeb184c0e051b3d6e453255"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.846975 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" event={"ID":"d86661dc-bc7e-43ff-9c4b-035a8afecace","Type":"ContainerStarted","Data":"20da5f9c2fe169bdd6a11820f56110e115cf2b6f5e3f413b5903ebaba532682f"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.852529 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv8rx\" (UniqueName: \"kubernetes.io/projected/0f65c972-5334-4705-bd19-90d43f3174e0-kube-api-access-mv8rx\") pod \"olm-operator-6b444d44fb-jkwt2\" (UID: \"0f65c972-5334-4705-bd19-90d43f3174e0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.882493 4779 generic.go:334] "Generic (PLEG): container finished" podID="5629efeb-c910-46f3-aa69-be7863bfb6f1" containerID="dff0f179d89276ebb11b2b72803d9a67d2c4e2a7a1f9247cff2f709cb3f86d4b" exitCode=0 Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.882621 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" event={"ID":"5629efeb-c910-46f3-aa69-be7863bfb6f1","Type":"ContainerDied","Data":"dff0f179d89276ebb11b2b72803d9a67d2c4e2a7a1f9247cff2f709cb3f86d4b"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.890374 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fvvf5" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.892278 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.897509 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58b8c\" (UniqueName: \"kubernetes.io/projected/47d26bcb-c4e5-439c-8709-d589e50a1dad-kube-api-access-58b8c\") pod \"dns-default-mrrkd\" (UID: \"47d26bcb-c4e5-439c-8709-d589e50a1dad\") " pod="openshift-dns/dns-default-mrrkd" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.904841 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqckz\" (UniqueName: \"kubernetes.io/projected/e2eedfd1-32f1-478a-b46d-939da24ba282-kube-api-access-dqckz\") pod \"marketplace-operator-79b997595-f8kkl\" (UID: \"e2eedfd1-32f1-478a-b46d-939da24ba282\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.904989 4779 generic.go:334] "Generic (PLEG): container finished" podID="fca68f2a-06ef-4c6a-8971-026d05045c4a" containerID="8b3a576fedc6932a2811a68ed48e3ba3768700d5e09da265eaf7f256d25f65f7" exitCode=0 Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.905691 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" event={"ID":"fca68f2a-06ef-4c6a-8971-026d05045c4a","Type":"ContainerDied","Data":"8b3a576fedc6932a2811a68ed48e3ba3768700d5e09da265eaf7f256d25f65f7"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.914728 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn" event={"ID":"228c180e-b1ee-45c8-a186-03b701adc920","Type":"ContainerStarted","Data":"ff311ebee075688038d1f60588bc74e649cc0e4e4a793a28f26dd086e01cebdc"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.914767 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn" event={"ID":"228c180e-b1ee-45c8-a186-03b701adc920","Type":"ContainerStarted","Data":"4e3de9f7861090ea54aa341f13b0721d0a8be37e61730067050310ffc2e1905c"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.927902 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv4tv\" (UniqueName: \"kubernetes.io/projected/bfadbe4f-46c0-4c08-b766-85cbbc651ac4-kube-api-access-wv4tv\") pod \"package-server-manager-789f6589d5-hxbwl\" (UID: \"bfadbe4f-46c0-4c08-b766-85cbbc651ac4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.928907 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:06 crc kubenswrapper[4779]: E1128 12:38:06.929933 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:07.42991076 +0000 UTC m=+147.995586114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.932461 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ctt57" event={"ID":"bb401509-3ef4-41bc-93db-fbee2b5454b9","Type":"ContainerStarted","Data":"b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.932492 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ctt57" event={"ID":"bb401509-3ef4-41bc-93db-fbee2b5454b9","Type":"ContainerStarted","Data":"2b165f77d94c78b8df6d4386354b0dc458a222e970ceedaf6e4a0023e08c5d40"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.954625 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" event={"ID":"b5705070-06f5-4ad4-b5df-4d82f90f8e27","Type":"ContainerStarted","Data":"ccab56ae3804a28125b1d7c71c470708a79879f4b488c0a19107038e93a0ca34"} Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.955492 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.966960 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc7gx\" (UniqueName: \"kubernetes.io/projected/b262e413-4650-41e5-b26b-0dd6ad0e4761-kube-api-access-zc7gx\") pod \"kube-storage-version-migrator-operator-b67b599dd-bvzn7\" (UID: \"b262e413-4650-41e5-b26b-0dd6ad0e4761\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.967259 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.970770 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x9fl\" (UniqueName: \"kubernetes.io/projected/95299c5d-a4b2-4528-9cc2-d6d0155aa621-kube-api-access-6x9fl\") pod \"catalog-operator-68c6474976-5npz2\" (UID: \"95299c5d-a4b2-4528-9cc2-d6d0155aa621\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.974858 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-lfk66" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.975185 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.979399 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.989840 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" Nov 28 12:38:06 crc kubenswrapper[4779]: I1128 12:38:06.993132 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7g7t\" (UniqueName: \"kubernetes.io/projected/56157237-28db-49f7-8506-bbddb98aa46b-kube-api-access-l7g7t\") pod \"csi-hostpathplugin-m7gb6\" (UID: \"56157237-28db-49f7-8506-bbddb98aa46b\") " pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.005880 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.010518 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnnlh\" (UniqueName: \"kubernetes.io/projected/79af79ce-0947-4a73-b45e-d588b52d115a-kube-api-access-pnnlh\") pod \"multus-admission-controller-857f4d67dd-xbgtb\" (UID: \"79af79ce-0947-4a73-b45e-d588b52d115a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xbgtb" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.014654 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/657217e1-39d9-4d22-acf9-930d4597d9fc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-x9sk6\" (UID: \"657217e1-39d9-4d22-acf9-930d4597d9fc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.031965 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:07 crc kubenswrapper[4779]: E1128 12:38:07.034525 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:07.534513507 +0000 UTC m=+148.100188861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.035293 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.043770 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.056377 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.060581 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ptfq\" (UniqueName: \"kubernetes.io/projected/df4a38f8-8868-4674-b1a8-5d47dd9b9d31-kube-api-access-4ptfq\") pod \"migrator-59844c95c7-ch6d4\" (UID: \"df4a38f8-8868-4674-b1a8-5d47dd9b9d31\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch6d4" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.075767 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fxlc\" (UniqueName: \"kubernetes.io/projected/314edc02-f932-423f-a24b-5db0c6c08957-kube-api-access-5fxlc\") pod \"packageserver-d55dfcdfc-lnvht\" (UID: \"314edc02-f932-423f-a24b-5db0c6c08957\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.075995 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.084452 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.101142 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.109563 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.113072 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t5r2m" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.124322 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.153247 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:07 crc kubenswrapper[4779]: E1128 12:38:07.158739 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:07.658713511 +0000 UTC m=+148.224388865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.182904 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-mrrkd" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.184441 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.265022 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:07 crc kubenswrapper[4779]: E1128 12:38:07.265348 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:07.765336141 +0000 UTC m=+148.331011495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.282367 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.305734 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xbgtb" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.323651 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch6d4" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.373062 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:07 crc kubenswrapper[4779]: E1128 12:38:07.373567 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:07.873549055 +0000 UTC m=+148.439224399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.478483 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:07 crc kubenswrapper[4779]: E1128 12:38:07.478769 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:07.978758758 +0000 UTC m=+148.544434112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.532343 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bz2gm"] Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.536620 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q"] Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.581294 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:07 crc kubenswrapper[4779]: E1128 12:38:07.582658 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:08.082644035 +0000 UTC m=+148.648319389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.696006 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.696010 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs"] Nov 28 12:38:07 crc kubenswrapper[4779]: E1128 12:38:07.696347 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:08.196330716 +0000 UTC m=+148.762006070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.796729 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:07 crc kubenswrapper[4779]: E1128 12:38:07.797039 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:08.297023047 +0000 UTC m=+148.862698401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.799227 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gc2sn" podStartSLOduration=129.799216016 podStartE2EDuration="2m9.799216016s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:07.79752658 +0000 UTC m=+148.363201924" watchObservedRunningTime="2025-11-28 12:38:07.799216016 +0000 UTC m=+148.364891370" Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.817166 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv"] Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.898307 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:07 crc kubenswrapper[4779]: E1128 12:38:07.898842 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:08.398819848 +0000 UTC m=+148.964495212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:07 crc kubenswrapper[4779]: I1128 12:38:07.999600 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:08 crc kubenswrapper[4779]: E1128 12:38:08.000023 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:08.500008782 +0000 UTC m=+149.065684146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.022000 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bz2gm" event={"ID":"e562e074-6a8d-4c91-91ed-895b1b1ac2d1","Type":"ContainerStarted","Data":"fb55bca617f85f96a9e11f6e3c2cd703c10a7bb55b2b432a46f96e1656cacf17"} Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.042991 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e86ee5ed1a1a0a23595a243674d64f7d1bc8e4f3c57edd328e3031400f2d202c"} Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.043033 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"70ecccc657a690e40ec4ba26623c45ef8faea344e4533e7d7d81611e2d06751d"} Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.102621 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q" event={"ID":"610909a5-8090-4ed3-b686-1f1176a59e9e","Type":"ContainerStarted","Data":"254979dc9cb323165227b04866778f05bef0ed2ff6f7423d3c803b5a6a69892a"} Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.104334 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:08 crc kubenswrapper[4779]: E1128 12:38:08.104607 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:08.604593378 +0000 UTC m=+149.170268732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.117414 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" podStartSLOduration=130.117393883 podStartE2EDuration="2m10.117393883s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:08.068772874 +0000 UTC m=+148.634448228" watchObservedRunningTime="2025-11-28 12:38:08.117393883 +0000 UTC m=+148.683069227" Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.190602 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k5rpm" event={"ID":"51d49383-db9f-4a63-865c-4387ecf691ed","Type":"ContainerStarted","Data":"fb40e4b8456ed8911b36eab606333797d1ce3dc3db8fa8e41468c56c1840157e"} Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.199796 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e6cb81904c595fd00fe47a11dd6f40e14c053d5edea3f4f030f8b7fe0c9a031d"} Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.199852 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7751b702f576daa4234c3a86dff4009aea4cacd1d12be36b433b9a904023dab8"} Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.211484 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.212248 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:08 crc kubenswrapper[4779]: E1128 12:38:08.212864 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:08.712839483 +0000 UTC m=+149.278514837 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.237216 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t85kw" podStartSLOduration=131.237188968 podStartE2EDuration="2m11.237188968s" podCreationTimestamp="2025-11-28 12:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:08.117827034 +0000 UTC m=+148.683502388" watchObservedRunningTime="2025-11-28 12:38:08.237188968 +0000 UTC m=+148.802864322" Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.237380 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-tqz88" podStartSLOduration=130.237374753 podStartE2EDuration="2m10.237374753s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:08.211296761 +0000 UTC m=+148.776972125" watchObservedRunningTime="2025-11-28 12:38:08.237374753 +0000 UTC m=+148.803050107" Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.262863 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6jxrr" podStartSLOduration=130.262846419 podStartE2EDuration="2m10.262846419s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:08.260797024 +0000 UTC m=+148.826472378" watchObservedRunningTime="2025-11-28 12:38:08.262846419 +0000 UTC m=+148.828521773" Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.276072 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1645f8e86c958d6f2063cc97a75ebf36fafcf2fe80a25e27eb2b956b98dcf02b"} Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.288350 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fvvf5" event={"ID":"f5a958f1-dcb5-4ec4-aecf-d75645454426","Type":"ContainerStarted","Data":"fbdeb942bb4bd7a7c7343375269d2afb49680b4fd85619398d5b6fb143dab433"} Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.314943 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:08 crc kubenswrapper[4779]: E1128 12:38:08.316186 4779 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:08.816173495 +0000 UTC m=+149.381848849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.327192 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" event={"ID":"5629efeb-c910-46f3-aa69-be7863bfb6f1","Type":"ContainerStarted","Data":"29841b112f3c8ea782bd4bda0d45d28a2e73fdd8c84df4ce187eb390152b549f"} Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.342839 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" event={"ID":"fca68f2a-06ef-4c6a-8971-026d05045c4a","Type":"ContainerStarted","Data":"a83be9532f1b6ca11b30086ec80dcd33d8c0975f56cc396dc4b2c0af41082926"} Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.364725 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" event={"ID":"66175ea0-414c-4c91-9aec-e1cfa7992a5b","Type":"ContainerStarted","Data":"5d884d0577fccd03aaf7f20d1024981cf03efdea74c152d8d3abe811f95a3b93"} Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.364768 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.366857 4779 patch_prober.go:28] interesting pod/downloads-7954f5f757-4jt92 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.366920 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4jt92" podUID="8adac5a2-60c1-4c11-a7bd-62c113d8caca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.376964 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4jt92" podStartSLOduration=130.376940731 podStartE2EDuration="2m10.376940731s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:08.350283853 +0000 UTC m=+148.915959207" watchObservedRunningTime="2025-11-28 12:38:08.376940731 +0000 UTC m=+148.942616095" Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.418704 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:08 crc kubenswrapper[4779]: E1128 12:38:08.418902 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:08.91887224 +0000 UTC m=+149.484547594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.419775 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:08 crc kubenswrapper[4779]: E1128 12:38:08.420609 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:08.920595946 +0000 UTC m=+149.486271290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.522007 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:08 crc kubenswrapper[4779]: E1128 12:38:08.523195 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:09.023180968 +0000 UTC m=+149.588856312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.549833 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-lfk66" podStartSLOduration=130.549816815 podStartE2EDuration="2m10.549816815s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:08.548365886 +0000 UTC m=+149.114041250" watchObservedRunningTime="2025-11-28 12:38:08.549816815 +0000 UTC m=+149.115492169" Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.623743 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:08 crc kubenswrapper[4779]: E1128 12:38:08.624245 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:09.124234839 +0000 UTC m=+149.689910193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.674038 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" podStartSLOduration=130.67401807 podStartE2EDuration="2m10.67401807s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:08.626944902 +0000 UTC m=+149.192620246" watchObservedRunningTime="2025-11-28 12:38:08.67401807 +0000 UTC m=+149.239693424" Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.681404 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" podStartSLOduration=131.681385048 podStartE2EDuration="2m11.681385048s" podCreationTimestamp="2025-11-28 12:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:08.667379691 +0000 UTC m=+149.233055045" watchObservedRunningTime="2025-11-28 12:38:08.681385048 +0000 UTC m=+149.247060402" Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.682501 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2"] Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.732772 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq"] Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.739191 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:08 crc kubenswrapper[4779]: E1128 12:38:08.739439 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:09.239422691 +0000 UTC m=+149.805098045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.739597 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:08 crc kubenswrapper[4779]: E1128 12:38:08.739833 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:09.239826251 +0000 UTC m=+149.805501605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.768931 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl"] Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.842152 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5"] Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.842684 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:08 crc kubenswrapper[4779]: E1128 12:38:08.858349 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:09.343351679 +0000 UTC m=+149.909027033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.929077 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7w7kl" podStartSLOduration=130.929057946 podStartE2EDuration="2m10.929057946s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:08.895273257 +0000 UTC m=+149.460948611" watchObservedRunningTime="2025-11-28 12:38:08.929057946 +0000 UTC m=+149.494733300"
Nov 28 12:38:08 crc kubenswrapper[4779]: I1128 12:38:08.971634 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:08 crc kubenswrapper[4779]: E1128 12:38:08.972080 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:09.472062034 +0000 UTC m=+150.037737388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.072379 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:09 crc kubenswrapper[4779]: E1128 12:38:09.072476 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:09.572453546 +0000 UTC m=+150.138128900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.072728 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:09 crc kubenswrapper[4779]: E1128 12:38:09.073064 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:09.573052202 +0000 UTC m=+150.138727556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.097376 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd6l4" podStartSLOduration=132.097358317 podStartE2EDuration="2m12.097358317s" podCreationTimestamp="2025-11-28 12:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:09.081487119 +0000 UTC m=+149.647162473" watchObservedRunningTime="2025-11-28 12:38:09.097358317 +0000 UTC m=+149.663033671"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.098370 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.145950 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ctt57" podStartSLOduration=131.145936795 podStartE2EDuration="2m11.145936795s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:09.144206418 +0000 UTC m=+149.709881772" watchObservedRunningTime="2025-11-28 12:38:09.145936795 +0000 UTC m=+149.711612149"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.173284 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:09 crc kubenswrapper[4779]: E1128 12:38:09.173668 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:09.673651671 +0000 UTC m=+150.239327025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.195961 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bz4fl" podStartSLOduration=131.195947961 podStartE2EDuration="2m11.195947961s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:09.194913143 +0000 UTC m=+149.760588497" watchObservedRunningTime="2025-11-28 12:38:09.195947961 +0000 UTC m=+149.761623315"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.219734 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8kkl"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.247521 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.256162 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.275675 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:09 crc kubenswrapper[4779]: E1128 12:38:09.275947 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:09.775938305 +0000 UTC m=+150.341613649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.389466 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:09 crc kubenswrapper[4779]: E1128 12:38:09.389777 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:09.88976154 +0000 UTC m=+150.455436894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.410400 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47" podStartSLOduration=131.410381035 podStartE2EDuration="2m11.410381035s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:09.387897529 +0000 UTC m=+149.953572883" watchObservedRunningTime="2025-11-28 12:38:09.410381035 +0000 UTC m=+149.976056389"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.428278 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.439107 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" podStartSLOduration=131.439081887 podStartE2EDuration="2m11.439081887s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:09.436996601 +0000 UTC m=+150.002671955" watchObservedRunningTime="2025-11-28 12:38:09.439081887 +0000 UTC m=+150.004757241"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.447813 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" event={"ID":"b262e413-4650-41e5-b26b-0dd6ad0e4761","Type":"ContainerStarted","Data":"40cf1002b3f251b597c6892dd07a1cb87f076f0f6b69cca27d344ed5299dad23"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.449130 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" event={"ID":"72921f7d-c7f6-4d36-a102-b3393776f50e","Type":"ContainerStarted","Data":"95bf73cbc893a40edfb744f83273385494f8ed5b315c585b42c4868c7960b1c0"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.449155 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" event={"ID":"72921f7d-c7f6-4d36-a102-b3393776f50e","Type":"ContainerStarted","Data":"620f5ae772e190c5e7ab11bcbaa77ed8e6efc85264181a5bda697f1326c875d8"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.458885 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k5rpm" event={"ID":"51d49383-db9f-4a63-865c-4387ecf691ed","Type":"ContainerStarted","Data":"d4c925430434cad2e16badf91686db313a8b7e2e129b80812b5389165826ef8e"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.479402 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" event={"ID":"e08e1dca-54a7-4ddc-8942-6a5645304b53","Type":"ContainerStarted","Data":"c610a63bb3146e951d6dc958a0914c2751172389115af21acff954d28a694b22"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.494312 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:09 crc kubenswrapper[4779]: E1128 12:38:09.494678 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:09.994664164 +0000 UTC m=+150.560339518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.500973 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-k5rpm" podStartSLOduration=131.500956253 podStartE2EDuration="2m11.500956253s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:09.499165495 +0000 UTC m=+150.064840849" watchObservedRunningTime="2025-11-28 12:38:09.500956253 +0000 UTC m=+150.066631607"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.501036 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xbgtb"]
Nov 28 12:38:09 crc kubenswrapper[4779]: W1128 12:38:09.511731 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod314edc02_f932_423f_a24b_5db0c6c08957.slice/crio-68c82d395a83f1aa10a508923afd1f4b204d3aae162c43a8ea48e13c0d087a04 WatchSource:0}: Error finding container 68c82d395a83f1aa10a508923afd1f4b204d3aae162c43a8ea48e13c0d087a04: Status 404 returned error can't find the container with id 68c82d395a83f1aa10a508923afd1f4b204d3aae162c43a8ea48e13c0d087a04
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.515127 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv" event={"ID":"1475f2e1-1c5b-470d-b0aa-0645ad327bb5","Type":"ContainerStarted","Data":"4457458f9be6c9c677e4859498665fe435e76820748b91e406a8df2e7ec1f90e"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.515639 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv" event={"ID":"1475f2e1-1c5b-470d-b0aa-0645ad327bb5","Type":"ContainerStarted","Data":"7b7468becdd4e343be68dcf7eac3e9d9102e506dc98e062509ff4e956c3a3450"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.569554 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs" event={"ID":"1562193f-2d67-487b-9a29-9be653c11154","Type":"ContainerStarted","Data":"ed4d896355c5ef2e8933f7705e28490092705f8114ca28f5c705e7764083b322"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.569629 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs" event={"ID":"1562193f-2d67-487b-9a29-9be653c11154","Type":"ContainerStarted","Data":"3746720f4321f29bee6529e40108fd5a17ff459acc921605a5016cda976d05f1"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.575073 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.593926 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-njrwv" podStartSLOduration=131.593910536 podStartE2EDuration="2m11.593910536s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:09.581606355 +0000 UTC m=+150.147281709" watchObservedRunningTime="2025-11-28 12:38:09.593910536 +0000 UTC m=+150.159585890"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.600217 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:09 crc kubenswrapper[4779]: E1128 12:38:09.602275 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:10.10223182 +0000 UTC m=+150.667907174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.626449 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" event={"ID":"e2eedfd1-32f1-478a-b46d-939da24ba282","Type":"ContainerStarted","Data":"c4a3e8a204c557c59d1c6d5dfa04d009d34de817f4e2667f4aa70d66e27ebb46"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.706819 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:09 crc kubenswrapper[4779]: E1128 12:38:09.707264 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:10.207252018 +0000 UTC m=+150.772927372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.712032 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2hxrs" podStartSLOduration=132.712011756 podStartE2EDuration="2m12.712011756s" podCreationTimestamp="2025-11-28 12:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:09.648708142 +0000 UTC m=+150.214383496" watchObservedRunningTime="2025-11-28 12:38:09.712011756 +0000 UTC m=+150.277687110"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.714579 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.714618 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" event={"ID":"5629efeb-c910-46f3-aa69-be7863bfb6f1","Type":"ContainerStarted","Data":"c38dcf3680a6772c8668092f0f689ffcb05849b0c3e45a36e6707393ae7e6887"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.807778 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:09 crc kubenswrapper[4779]: E1128 12:38:09.808207 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:10.308175625 +0000 UTC m=+150.873850979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.808637 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.817230 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" event={"ID":"189cc15e-4851-49ad-a757-49451158a3d7","Type":"ContainerStarted","Data":"1ccf019e5e90a49e8db56bb6689d31cb701b13e33c9cb56f50ee68184ba158fb"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.817276 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9kh95"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.817292 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.817302 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bz2gm" event={"ID":"e562e074-6a8d-4c91-91ed-895b1b1ac2d1","Type":"ContainerStarted","Data":"15192b755880bef3c369d7e116116eff0e843adf618c0b34e65caedd955717f6"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.817312 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fvvf5" event={"ID":"f5a958f1-dcb5-4ec4-aecf-d75645454426","Type":"ContainerStarted","Data":"87e0e01e2423c534843e04e910741154e04c61a3340aa30ff5346d42eb0f0ae7"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.817324 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.817334 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.817342 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t5r2m"]
Nov 28 12:38:09 crc kubenswrapper[4779]: W1128 12:38:09.824991 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod331b553d_6ae6_48fa_93a1_5e07ce6747f3.slice/crio-b8920c46d5a189ccc9b96ce6aae8acdc80dc0922ffaa884c4eae67134fe95581 WatchSource:0}: Error finding container b8920c46d5a189ccc9b96ce6aae8acdc80dc0922ffaa884c4eae67134fe95581: Status 404 returned error can't find the container with id b8920c46d5a189ccc9b96ce6aae8acdc80dc0922ffaa884c4eae67134fe95581
Nov 28 12:38:09 crc kubenswrapper[4779]: E1128 12:38:09.825359 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:10.325329517 +0000 UTC m=+150.891004871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.831226 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" event={"ID":"0f65c972-5334-4705-bd19-90d43f3174e0","Type":"ContainerStarted","Data":"6bbe65baaeba76cc19d833a829a8540266e57192976ae5e513558d1098fd2114"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.831288 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" event={"ID":"0f65c972-5334-4705-bd19-90d43f3174e0","Type":"ContainerStarted","Data":"3662afcab03c1d1b403f7fdf06ef72ad2e3b7de3178873b67dfef2215b908b88"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.831804 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.833511 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" podStartSLOduration=132.833490357 podStartE2EDuration="2m12.833490357s" podCreationTimestamp="2025-11-28 12:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:09.76526433 +0000 UTC m=+150.330939684" watchObservedRunningTime="2025-11-28 12:38:09.833490357 +0000 UTC m=+150.399165711"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.841521 4779 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jkwt2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.841602 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" podUID="0f65c972-5334-4705-bd19-90d43f3174e0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.846035 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hp9zp"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.846340 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hp9zp"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.849403 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m7gb6"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.855848 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" event={"ID":"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754","Type":"ContainerStarted","Data":"06f99a09d9e3ae8c7ea124d2ec78c4323a30838ed0bbfcd7e560e58e2e0cadf0"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.860265 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mrrkd"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.866186 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ch6d4"]
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.877556 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q" event={"ID":"610909a5-8090-4ed3-b686-1f1176a59e9e","Type":"ContainerStarted","Data":"7729544c4b99e8de1f6838f25ae910bbeb7ddcfc040be90426f830ef5f41fe69"}
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.878495 4779 patch_prober.go:28] interesting pod/downloads-7954f5f757-4jt92 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.878534 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4jt92" podUID="8adac5a2-60c1-4c11-a7bd-62c113d8caca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.898209 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-k5rpm"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.907291 4779 patch_prober.go:28] interesting pod/router-default-5444994796-k5rpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 12:38:09 crc kubenswrapper[4779]: [-]has-synced failed: reason withheld
Nov 28 12:38:09 crc kubenswrapper[4779]: [+]process-running ok
Nov 28 12:38:09 crc kubenswrapper[4779]: healthz check failed
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.907345 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5rpm" podUID="51d49383-db9f-4a63-865c-4387ecf691ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.910174 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:09 crc kubenswrapper[4779]: E1128 12:38:09.911668 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:10.411649451 +0000 UTC m=+150.977324805 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:09 crc kubenswrapper[4779]: W1128 12:38:09.944115 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfadbe4f_46c0_4c08_b766_85cbbc651ac4.slice/crio-d9e056660a831b9849a0d855d7ebd7facdc8f83f628a44518b0cd701a6128696 WatchSource:0}: Error finding container d9e056660a831b9849a0d855d7ebd7facdc8f83f628a44518b0cd701a6128696: Status 404 returned error can't find the container with id d9e056660a831b9849a0d855d7ebd7facdc8f83f628a44518b0cd701a6128696
Nov 28 12:38:09 crc kubenswrapper[4779]: W1128 12:38:09.945337 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47d26bcb_c4e5_439c_8709_d589e50a1dad.slice/crio-150d3b5847a63190cb657dc39e244fa019e0ec5d3ec2eb2761078606d271f5b9 WatchSource:0}: Error finding container 150d3b5847a63190cb657dc39e244fa019e0ec5d3ec2eb2761078606d271f5b9: Status 404 returned error can't find the container with id 150d3b5847a63190cb657dc39e244fa019e0ec5d3ec2eb2761078606d271f5b9
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.958588 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.958792 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz"
Nov 28 12:38:09 crc kubenswrapper[4779]: I1128 12:38:09.984059 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz"
Nov 28 12:38:10 crc kubenswrapper[4779]: W1128 12:38:09.998430 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf4a38f8_8868_4674_b1a8_5d47dd9b9d31.slice/crio-f9652f3964ce7a19de1d14719ef53f5fb80ed75a342eb4bf0c7c07c0f531860b WatchSource:0}: Error finding container f9652f3964ce7a19de1d14719ef53f5fb80ed75a342eb4bf0c7c07c0f531860b: Status 404 returned error can't find the container with id f9652f3964ce7a19de1d14719ef53f5fb80ed75a342eb4bf0c7c07c0f531860b
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.011891 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:10 crc kubenswrapper[4779]: E1128 12:38:10.012429 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:10.512414164 +0000 UTC m=+151.078089518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.115692 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:10 crc kubenswrapper[4779]: E1128 12:38:10.116621 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:10.616591239 +0000 UTC m=+151.182266583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.120385 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:10 crc kubenswrapper[4779]: E1128 12:38:10.120695 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:10.620684639 +0000 UTC m=+151.186359993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.222336 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:10 crc kubenswrapper[4779]: E1128 12:38:10.222544 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:10.722521701 +0000 UTC m=+151.288197055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.232357 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" podStartSLOduration=132.232341396 podStartE2EDuration="2m12.232341396s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:10.231475452 +0000 UTC m=+150.797150816" watchObservedRunningTime="2025-11-28 12:38:10.232341396 +0000 UTC m=+150.798016750"
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.323666 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:10 crc kubenswrapper[4779]: E1128 12:38:10.324651 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:10.824637651 +0000 UTC m=+151.390313005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.365551 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fvvf5" podStartSLOduration=6.365533112 podStartE2EDuration="6.365533112s" podCreationTimestamp="2025-11-28 12:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:10.354257108 +0000 UTC m=+150.919932462" watchObservedRunningTime="2025-11-28 12:38:10.365533112 +0000 UTC m=+150.931208466"
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.377142 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-92x8q" podStartSLOduration=132.377125064 podStartE2EDuration="2m12.377125064s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:10.320300654 +0000 UTC m=+150.885976008" watchObservedRunningTime="2025-11-28 12:38:10.377125064 +0000 UTC m=+150.942800418"
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.425824 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:10 crc kubenswrapper[4779]: E1128 12:38:10.426287 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:10.926272217 +0000 UTC m=+151.491947571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.527194 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:10 crc kubenswrapper[4779]: E1128 12:38:10.527612 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:11.027582475 +0000 UTC m=+151.593257829 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.629914 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:10 crc kubenswrapper[4779]: E1128 12:38:10.630272 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:11.130238289 +0000 UTC m=+151.695913643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.630769 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:10 crc kubenswrapper[4779]: E1128 12:38:10.631240 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:11.131224625 +0000 UTC m=+151.696899979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.733679 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:10 crc kubenswrapper[4779]: E1128 12:38:10.733963 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:11.233946121 +0000 UTC m=+151.799621475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.837136 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:10 crc kubenswrapper[4779]: E1128 12:38:10.837840 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:11.337820508 +0000 UTC m=+151.903495852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.899287 4779 patch_prober.go:28] interesting pod/router-default-5444994796-k5rpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 12:38:10 crc kubenswrapper[4779]: [-]has-synced failed: reason withheld
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]process-running ok
Nov 28 12:38:10 crc kubenswrapper[4779]: healthz check failed
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.899602 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5rpm" podUID="51d49383-db9f-4a63-865c-4387ecf691ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.908546 4779 patch_prober.go:28] interesting pod/apiserver-76f77b778f-hp9zp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]log ok
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]etcd ok
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]poststarthook/start-apiserver-admission-initializer ok
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]poststarthook/generic-apiserver-start-informers ok
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]poststarthook/max-in-flight-filter ok
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]poststarthook/storage-object-count-tracker-hook ok
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Nov 28 12:38:10 crc kubenswrapper[4779]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Nov 28 12:38:10 crc kubenswrapper[4779]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]poststarthook/project.openshift.io-projectcache ok
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]poststarthook/openshift.io-startinformers ok
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]poststarthook/openshift.io-restmapperupdater ok
Nov 28 12:38:10 crc kubenswrapper[4779]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Nov 28 12:38:10 crc kubenswrapper[4779]: livez check failed
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.908600 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-hp9zp" podUID="5629efeb-c910-46f3-aa69-be7863bfb6f1" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.937288 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" event={"ID":"b262e413-4650-41e5-b26b-0dd6ad0e4761","Type":"ContainerStarted","Data":"0d1b02a1aa0a41e7f1a9c56a6e5d1e5a89552da7dfe9933e2a32a6aae40117ae"}
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.939483 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:10 crc kubenswrapper[4779]: E1128 12:38:10.939770 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:11.439756822 +0000 UTC m=+152.005432176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.979957 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" event={"ID":"314edc02-f932-423f-a24b-5db0c6c08957","Type":"ContainerStarted","Data":"a0ee5f71927789321f4a4ef923ef1ea20f5699449da37d0a11fd96a872321a0a"}
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.980006 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" event={"ID":"314edc02-f932-423f-a24b-5db0c6c08957","Type":"ContainerStarted","Data":"68c82d395a83f1aa10a508923afd1f4b204d3aae162c43a8ea48e13c0d087a04"}
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.981001 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht"
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.991630 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bvzn7" podStartSLOduration=132.991603988 podStartE2EDuration="2m12.991603988s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:10.988469953 +0000 UTC m=+151.554145327" watchObservedRunningTime="2025-11-28 12:38:10.991603988 +0000 UTC m=+151.557279342"
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.996328 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t5r2m" event={"ID":"5cebf084-4ed8-45d2-a6ed-77c092539420","Type":"ContainerStarted","Data":"9ff3aa11c78097ddc4a9b9b86d6c6e4cd0bc130650b50f464375a7da5ac223ad"}
Nov 28 12:38:10 crc kubenswrapper[4779]: I1128 12:38:10.996476 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t5r2m" event={"ID":"5cebf084-4ed8-45d2-a6ed-77c092539420","Type":"ContainerStarted","Data":"b5b6d84d67e28e0611c23dc8cc9f81ee90a969a4949c933fcac0fef5ef8b433c"}
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.021717 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" event={"ID":"8e8ac53b-b8ec-45f8-8b02-008f7e50a85f","Type":"ContainerStarted","Data":"d4ebfcddf094511d9b3856cfafb80cbd83f61672415150131d312dff67e2de42"}
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.042900 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:11 crc kubenswrapper[4779]: E1128 12:38:11.054653 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:11.554633665 +0000 UTC m=+152.120309009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.056676 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" podStartSLOduration=133.056658329 podStartE2EDuration="2m13.056658329s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:11.014482324 +0000 UTC m=+151.580157668" watchObservedRunningTime="2025-11-28 12:38:11.056658329 +0000 UTC m=+151.622333683"
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.057370 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-t5r2m" podStartSLOduration=8.057363348 podStartE2EDuration="8.057363348s" podCreationTimestamp="2025-11-28 12:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:11.057008039 +0000 UTC m=+151.622683383" watchObservedRunningTime="2025-11-28 12:38:11.057363348 +0000 UTC m=+151.623038702"
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.073783 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" event={"ID":"72921f7d-c7f6-4d36-a102-b3393776f50e","Type":"ContainerStarted","Data":"322b86930c6b72671720bd73f9c9148df71280474b1386abde0ef7cb87c3142a"}
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.077213 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xbgtb" event={"ID":"79af79ce-0947-4a73-b45e-d588b52d115a","Type":"ContainerStarted","Data":"1567088635cc2537145bd7219573e195fccc8c8d60e99e61fbd2654e53f6e713"}
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.077247 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xbgtb" event={"ID":"79af79ce-0947-4a73-b45e-d588b52d115a","Type":"ContainerStarted","Data":"c4aeee994c50c8bf1bdc6ded13f04082d8b2066560fbdd86f062e53b09882209"}
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.124833 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" event={"ID":"e2eedfd1-32f1-478a-b46d-939da24ba282","Type":"ContainerStarted","Data":"cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a"}
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.125944 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl"
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.147310 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:11 crc kubenswrapper[4779]: E1128 12:38:11.150179 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:11.650149356 +0000 UTC m=+152.215824710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.174953 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t6kkq" podStartSLOduration=133.174936244 podStartE2EDuration="2m13.174936244s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:11.173931937 +0000 UTC m=+151.739607291" watchObservedRunningTime="2025-11-28 12:38:11.174936244 +0000 UTC m=+151.740611598"
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.176560 4779 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-f8kkl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body=
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.176611 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" podUID="e2eedfd1-32f1-478a-b46d-939da24ba282" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused"
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.176977 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xnq47"
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.184258 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bz2gm" event={"ID":"e562e074-6a8d-4c91-91ed-895b1b1ac2d1","Type":"ContainerStarted","Data":"e14dc7a66ed699165a59f33a8bf0c29613a373e184340a47a2b859f9cf4d8217"}
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.215636 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" event={"ID":"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754","Type":"ContainerStarted","Data":"a91a9ad988ef9c59821b6ff51a7fadd579a4437e448194557df1eab906d678c5"}
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.215690 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" event={"ID":"8aa27b4a-7e0c-42e7-8732-8fa9dd15a754","Type":"ContainerStarted","Data":"11979f0f0f7ab793045461beb6bc8a7016597fe1fdc58b85cbd61ab7b01f12d8"}
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.230736 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" event={"ID":"95299c5d-a4b2-4528-9cc2-d6d0155aa621","Type":"ContainerStarted","Data":"d1a5de7a1114caa5d6b4fca1556797c46c1e240f8a37860dc58fbf976f555f34"}
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.230786 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" event={"ID":"95299c5d-a4b2-4528-9cc2-d6d0155aa621","Type":"ContainerStarted","Data":"1d769b5e4bea0ff7b70f86ed9981a699a3026bcaa812d5bd38bb7eceb38a7aea"}
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.231598 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2"
Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.269262 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:11 crc kubenswrapper[4779]: E1128 12:38:11.271880 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:11.771862423 +0000 UTC m=+152.337537777 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.273904 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" event={"ID":"a42c1ca1-45e2-48a2-94f2-a0e38e001d4b","Type":"ContainerStarted","Data":"dfbbc150619dbda54ce78d1a591d7f5abd81c769213f33dfc90b0ff2119e8273"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.273942 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" event={"ID":"a42c1ca1-45e2-48a2-94f2-a0e38e001d4b","Type":"ContainerStarted","Data":"812c1ee0ea337184aedfe2ffc6e84c5b2d0159959015f5552fef899e6d138c1b"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.291274 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" event={"ID":"e08e1dca-54a7-4ddc-8942-6a5645304b53","Type":"ContainerStarted","Data":"ffde232ff9c748acbb5f69cc5f415afe7f4f9f7772978cde611b5db373408e85"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.291321 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" event={"ID":"e08e1dca-54a7-4ddc-8942-6a5645304b53","Type":"ContainerStarted","Data":"794cebe1417c2e3e2b499a2899a9d88eddfd37d2b6262ada0da1fd93b3d00404"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.304646 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" event={"ID":"657217e1-39d9-4d22-acf9-930d4597d9fc","Type":"ContainerStarted","Data":"4d41bba5ddbcd687806ac063665a4dce39dde9423c45310f4e9ad1387a5548f0"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.319760 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" event={"ID":"189cc15e-4851-49ad-a757-49451158a3d7","Type":"ContainerStarted","Data":"ed8d289ec1b39b0e1bc6891ba419f36988de748661a8dc25c8bdb04b750af4db"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.328636 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" podStartSLOduration=133.328620622 podStartE2EDuration="2m13.328620622s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:11.268518133 +0000 UTC m=+151.834193487" watchObservedRunningTime="2025-11-28 12:38:11.328620622 +0000 UTC m=+151.894295976" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.329343 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" podStartSLOduration=133.329339371 podStartE2EDuration="2m13.329339371s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-28 12:38:11.328452587 +0000 UTC m=+151.894127941" watchObservedRunningTime="2025-11-28 12:38:11.329339371 +0000 UTC m=+151.895014725" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.336298 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5npz2" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.357351 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch6d4" event={"ID":"df4a38f8-8868-4674-b1a8-5d47dd9b9d31","Type":"ContainerStarted","Data":"c73ec172d4c18a298d232847b4a53436b320f0fe58284c628d801f561befa5c2"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.357408 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch6d4" event={"ID":"df4a38f8-8868-4674-b1a8-5d47dd9b9d31","Type":"ContainerStarted","Data":"f9652f3964ce7a19de1d14719ef53f5fb80ed75a342eb4bf0c7c07c0f531860b"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.372524 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:11 crc kubenswrapper[4779]: E1128 12:38:11.373965 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:11.873945092 +0000 UTC m=+152.439620446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.383120 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mrrkd" event={"ID":"47d26bcb-c4e5-439c-8709-d589e50a1dad","Type":"ContainerStarted","Data":"150d3b5847a63190cb657dc39e244fa019e0ec5d3ec2eb2761078606d271f5b9"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.397362 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" event={"ID":"bfadbe4f-46c0-4c08-b766-85cbbc651ac4","Type":"ContainerStarted","Data":"81c08d69e8ebad8cca6f98a06d5d49f746e4ea729a0fb486b467632462f8f079"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.397418 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" event={"ID":"bfadbe4f-46c0-4c08-b766-85cbbc651ac4","Type":"ContainerStarted","Data":"d9e056660a831b9849a0d855d7ebd7facdc8f83f628a44518b0cd701a6128696"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.408349 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" event={"ID":"331b553d-6ae6-48fa-93a1-5e07ce6747f3","Type":"ContainerStarted","Data":"11d3b300f97d46647e67bbd6b7a8e361c07b2242e45b49bfd6bf53fc32f284c0"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.408390 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" event={"ID":"331b553d-6ae6-48fa-93a1-5e07ce6747f3","Type":"ContainerStarted","Data":"b8920c46d5a189ccc9b96ce6aae8acdc80dc0922ffaa884c4eae67134fe95581"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.413378 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" event={"ID":"56157237-28db-49f7-8506-bbddb98aa46b","Type":"ContainerStarted","Data":"ac33b7c2e16a257725d5cc215de658f32a2397dc4660fc8c172c6a7f20770bf2"} Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.422721 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jkwt2" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.433840 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-bz2gm" podStartSLOduration=133.433823054 podStartE2EDuration="2m13.433823054s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:11.431721168 +0000 UTC m=+151.997396522" watchObservedRunningTime="2025-11-28 12:38:11.433823054 +0000 UTC m=+151.999498408" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.434281 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-5p2wz" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.475494 4779 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:11 crc kubenswrapper[4779]: E1128 12:38:11.478111 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:11.978076876 +0000 UTC m=+152.543752230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.515680 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z75wf" podStartSLOduration=133.515660408 podStartE2EDuration="2m13.515660408s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:11.467592833 +0000 UTC m=+152.033268197" watchObservedRunningTime="2025-11-28 12:38:11.515660408 +0000 UTC m=+152.081335762" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.516389 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gz9cl" podStartSLOduration=133.516384737 podStartE2EDuration="2m13.516384737s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:11.509782519 +0000 UTC m=+152.075457873" watchObservedRunningTime="2025-11-28 12:38:11.516384737 +0000 UTC m=+152.082060091" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.569157 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" podStartSLOduration=133.569137118 podStartE2EDuration="2m13.569137118s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:11.568771318 +0000 UTC m=+152.134446662" watchObservedRunningTime="2025-11-28 12:38:11.569137118 +0000 UTC m=+152.134812472" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.583394 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:11 crc kubenswrapper[4779]: E1128 12:38:11.584128 4779 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:12.084114141 +0000 UTC m=+152.649789495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.686051 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:11 crc kubenswrapper[4779]: E1128 12:38:11.686593 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:12.186569719 +0000 UTC m=+152.752245243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.707391 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fg2kt" podStartSLOduration=133.707361009 podStartE2EDuration="2m13.707361009s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:11.706672681 +0000 UTC m=+152.272348035" watchObservedRunningTime="2025-11-28 12:38:11.707361009 +0000 UTC m=+152.273036363" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.782013 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9kh95" podStartSLOduration=133.781986768 podStartE2EDuration="2m13.781986768s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:11.78056355 +0000 UTC m=+152.346238904" watchObservedRunningTime="2025-11-28 12:38:11.781986768 +0000 UTC m=+152.347662122" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.787506 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:11 crc kubenswrapper[4779]: E1128 12:38:11.787806 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:12.287790535 +0000 UTC m=+152.853465889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.893277 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:11 crc kubenswrapper[4779]: E1128 12:38:11.893756 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:12.393739917 +0000 UTC m=+152.959415271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.910204 4779 patch_prober.go:28] interesting pod/router-default-5444994796-k5rpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 12:38:11 crc kubenswrapper[4779]: [-]has-synced failed: reason withheld Nov 28 12:38:11 crc kubenswrapper[4779]: [+]process-running ok Nov 28 12:38:11 crc kubenswrapper[4779]: healthz check failed Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.910685 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5rpm" podUID="51d49383-db9f-4a63-865c-4387ecf691ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.917264 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lnvht" Nov 28 12:38:11 crc kubenswrapper[4779]: I1128 12:38:11.995310 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:11 crc kubenswrapper[4779]: E1128 12:38:11.997531 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:12.497494151 +0000 UTC m=+153.063169505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.096986 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.097353 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:12.597337269 +0000 UTC m=+153.163012623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.198198 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.198385 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:12.698357339 +0000 UTC m=+153.264032693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.198508 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.198880 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:12.698873133 +0000 UTC m=+153.264548487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.299394 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.299741 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:12.799714138 +0000 UTC m=+153.365389492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.400859 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.401168 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:12.901157029 +0000 UTC m=+153.466832383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.417714 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mrrkd" event={"ID":"47d26bcb-c4e5-439c-8709-d589e50a1dad","Type":"ContainerStarted","Data":"75e763e7bf44d8b7d6aab09dd3fd004158f1ac28258f0ae26f1b423ff49479b2"} Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.417744 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mrrkd" event={"ID":"47d26bcb-c4e5-439c-8709-d589e50a1dad","Type":"ContainerStarted","Data":"45370b5f00faec7628d3660199ae1148fc38121fdb1ef147905577eeb03265d4"} Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.418345 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mrrkd" Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.419741 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" event={"ID":"bfadbe4f-46c0-4c08-b766-85cbbc651ac4","Type":"ContainerStarted","Data":"9839cf650c2b2b9713100c043f20d475a5970305455bd1c45f55b79129d2e484"} Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.420209 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.421515 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xbgtb" event={"ID":"79af79ce-0947-4a73-b45e-d588b52d115a","Type":"ContainerStarted","Data":"40207d3f54c6c8570092cd641b9022c8aaea6b94860bda5cf0aec06c9d8aab7d"} Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.430655 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch6d4" event={"ID":"df4a38f8-8868-4674-b1a8-5d47dd9b9d31","Type":"ContainerStarted","Data":"0c6f12dd4f89227644dc6182bc771d69c2ff2b04af7c636e648eabd5f37a9010"} Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.432302 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" event={"ID":"56157237-28db-49f7-8506-bbddb98aa46b","Type":"ContainerStarted","Data":"a2b09f805fb3f11a04775065f53e58d2872c2ca5869b8196e13a221aac90e2ab"} Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.433244 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" event={"ID":"657217e1-39d9-4d22-acf9-930d4597d9fc","Type":"ContainerStarted","Data":"8471c96b8dfc855d992e14c0fb38ff694c783b6440951eb0e7729e4f393338b0"} Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.435141 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" event={"ID":"8e8ac53b-b8ec-45f8-8b02-008f7e50a85f","Type":"ContainerStarted","Data":"64a909199654da1acce320cf51d7a5d77e07c9bbc499bd47ffcf61d9cb098e76"} Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.435987 4779 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-f8kkl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.436037 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" podUID="e2eedfd1-32f1-478a-b46d-939da24ba282" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.439720 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mrrkd" podStartSLOduration=8.439706917 podStartE2EDuration="8.439706917s" podCreationTimestamp="2025-11-28 12:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:12.437719564 +0000 UTC m=+153.003394918" watchObservedRunningTime="2025-11-28 12:38:12.439706917 +0000 UTC m=+153.005382271" Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.496862 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-xbgtb" podStartSLOduration=134.496841325 podStartE2EDuration="2m14.496841325s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:12.49033246 +0000 UTC m=+153.056007814" watchObservedRunningTime="2025-11-28 12:38:12.496841325 +0000 UTC m=+153.062516679" Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.498170 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bxffw" podStartSLOduration=134.498164941 podStartE2EDuration="2m14.498164941s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:12.467542047 +0000 UTC m=+153.033217401" watchObservedRunningTime="2025-11-28 12:38:12.498164941 +0000 UTC m=+153.063840295" Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.501678 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.503410 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.003395782 +0000 UTC m=+153.569071136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.563103 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ch6d4" podStartSLOduration=134.563072968 podStartE2EDuration="2m14.563072968s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:12.535480826 +0000 UTC m=+153.101156180" watchObservedRunningTime="2025-11-28 12:38:12.563072968 +0000 UTC m=+153.128748322" Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.592035 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" podStartSLOduration=134.592014747 podStartE2EDuration="2m14.592014747s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:12.562622406 +0000 UTC m=+153.128297750" watchObservedRunningTime="2025-11-28 12:38:12.592014747 +0000 UTC m=+153.157690101" Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.592883 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9sk6" podStartSLOduration=134.59287569 podStartE2EDuration="2m14.59287569s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:12.592218603 +0000 UTC m=+153.157893967" watchObservedRunningTime="2025-11-28 12:38:12.59287569 +0000 UTC m=+153.158551044" Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.603896 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.604283 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.104271917 +0000 UTC m=+153.669947271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.704734 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.704933 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.204898926 +0000 UTC m=+153.770574280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.705262 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.705629 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.205612566 +0000 UTC m=+153.771287920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.806829 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.807024 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.306995045 +0000 UTC m=+153.872670399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.807285 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.807434 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.307427357 +0000 UTC m=+153.873102711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.906295 4779 patch_prober.go:28] interesting pod/router-default-5444994796-k5rpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 12:38:12 crc kubenswrapper[4779]: [-]has-synced failed: reason withheld Nov 28 12:38:12 crc kubenswrapper[4779]: [+]process-running ok Nov 28 12:38:12 crc kubenswrapper[4779]: healthz check failed Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.906392 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5rpm" podUID="51d49383-db9f-4a63-865c-4387ecf691ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.908633 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.908825 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.408801716 +0000 UTC m=+153.974477070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:12 crc kubenswrapper[4779]: I1128 12:38:12.908998 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:12 crc kubenswrapper[4779]: E1128 12:38:12.909377 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.409358041 +0000 UTC m=+153.975033395 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.010561 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 12:38:13 crc kubenswrapper[4779]: E1128 12:38:13.010878 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.510837914 +0000 UTC m=+154.076513268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.011238 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:13 crc kubenswrapper[4779]: E1128 12:38:13.011579 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.511563993 +0000 UTC m=+154.077239347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.112691 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:13 crc kubenswrapper[4779]: E1128 12:38:13.113198 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.613043086 +0000 UTC m=+154.178718430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.113422 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:13 crc kubenswrapper[4779]: E1128 12:38:13.113846 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.613828917 +0000 UTC m=+154.179504271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.185660 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xbp9s"]
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.186904 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xbp9s"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.192834 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.196227 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbp9s"]
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.243874 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:13 crc kubenswrapper[4779]: E1128 12:38:13.244311 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.744283859 +0000 UTC m=+154.309959213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.344924 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqxlr\" (UniqueName: \"kubernetes.io/projected/b88224c6-06e6-41c7-bba9-cb04ae3361e0-kube-api-access-jqxlr\") pod \"certified-operators-xbp9s\" (UID: \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\") " pod="openshift-marketplace/certified-operators-xbp9s"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.344988 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.345012 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88224c6-06e6-41c7-bba9-cb04ae3361e0-utilities\") pod \"certified-operators-xbp9s\" (UID: \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\") " pod="openshift-marketplace/certified-operators-xbp9s"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.345191 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88224c6-06e6-41c7-bba9-cb04ae3361e0-catalog-content\") pod \"certified-operators-xbp9s\" (UID: \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\") " pod="openshift-marketplace/certified-operators-xbp9s"
Nov 28 12:38:13 crc kubenswrapper[4779]: E1128 12:38:13.345368 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.84535062 +0000 UTC m=+154.411025964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.389284 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tsxnv"]
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.390301 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.396510 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.406405 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tsxnv"]
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.441388 4779 generic.go:334] "Generic (PLEG): container finished" podID="189cc15e-4851-49ad-a757-49451158a3d7" containerID="ed8d289ec1b39b0e1bc6891ba419f36988de748661a8dc25c8bdb04b750af4db" exitCode=0
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.441449 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" event={"ID":"189cc15e-4851-49ad-a757-49451158a3d7","Type":"ContainerDied","Data":"ed8d289ec1b39b0e1bc6891ba419f36988de748661a8dc25c8bdb04b750af4db"}
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.444619 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" event={"ID":"56157237-28db-49f7-8506-bbddb98aa46b","Type":"ContainerStarted","Data":"9380f5f1e862c3e4d8bd8b137615184d21af14da8272657f368ea87df5ad80fa"}
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.444646 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" event={"ID":"56157237-28db-49f7-8506-bbddb98aa46b","Type":"ContainerStarted","Data":"63c6ddb2f838face354b0ffe52b4dbaafde3e1616b42808163b78129545a11a7"}
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.444656 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" event={"ID":"56157237-28db-49f7-8506-bbddb98aa46b","Type":"ContainerStarted","Data":"50762d86ae2eb437e7ffab3ddad20029e30dadae6c82e7e3d549ef946761984c"}
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.445668 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.445886 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqxlr\" (UniqueName: \"kubernetes.io/projected/b88224c6-06e6-41c7-bba9-cb04ae3361e0-kube-api-access-jqxlr\") pod \"certified-operators-xbp9s\" (UID: \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\") " pod="openshift-marketplace/certified-operators-xbp9s"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.445956 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88224c6-06e6-41c7-bba9-cb04ae3361e0-utilities\") pod \"certified-operators-xbp9s\" (UID: \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\") " pod="openshift-marketplace/certified-operators-xbp9s"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.446009 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88224c6-06e6-41c7-bba9-cb04ae3361e0-catalog-content\") pod \"certified-operators-xbp9s\" (UID: \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\") " pod="openshift-marketplace/certified-operators-xbp9s"
Nov 28 12:38:13 crc kubenswrapper[4779]: E1128 12:38:13.446519 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:13.946474323 +0000 UTC m=+154.512149837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.446641 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88224c6-06e6-41c7-bba9-cb04ae3361e0-catalog-content\") pod \"certified-operators-xbp9s\" (UID: \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\") " pod="openshift-marketplace/certified-operators-xbp9s"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.447020 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88224c6-06e6-41c7-bba9-cb04ae3361e0-utilities\") pod \"certified-operators-xbp9s\" (UID: \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\") " pod="openshift-marketplace/certified-operators-xbp9s"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.449303 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.473545 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-m7gb6" podStartSLOduration=9.473518041 podStartE2EDuration="9.473518041s" podCreationTimestamp="2025-11-28 12:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:13.473399108 +0000 UTC m=+154.039074542" watchObservedRunningTime="2025-11-28 12:38:13.473518041 +0000 UTC m=+154.039193395"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.477670 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqxlr\" (UniqueName: \"kubernetes.io/projected/b88224c6-06e6-41c7-bba9-cb04ae3361e0-kube-api-access-jqxlr\") pod \"certified-operators-xbp9s\" (UID: \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\") " pod="openshift-marketplace/certified-operators-xbp9s"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.535694 4779 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.547256 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftpj8\" (UniqueName: \"kubernetes.io/projected/42148fd9-447b-43a5-b513-7cc37b19ab16-kube-api-access-ftpj8\") pod \"community-operators-tsxnv\" (UID: \"42148fd9-447b-43a5-b513-7cc37b19ab16\") " pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.547362 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42148fd9-447b-43a5-b513-7cc37b19ab16-catalog-content\") pod \"community-operators-tsxnv\" (UID: \"42148fd9-447b-43a5-b513-7cc37b19ab16\") " pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.547401 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.547795 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xbp9s"
Nov 28 12:38:13 crc kubenswrapper[4779]: E1128 12:38:13.548303 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:14.048278924 +0000 UTC m=+154.613954278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.548409 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42148fd9-447b-43a5-b513-7cc37b19ab16-utilities\") pod \"community-operators-tsxnv\" (UID: \"42148fd9-447b-43a5-b513-7cc37b19ab16\") " pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.587360 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nv895"]
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.588695 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nv895"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.596289 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nv895"]
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.650001 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.650153 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42148fd9-447b-43a5-b513-7cc37b19ab16-utilities\") pod \"community-operators-tsxnv\" (UID: \"42148fd9-447b-43a5-b513-7cc37b19ab16\") " pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.650235 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpj8\" (UniqueName: \"kubernetes.io/projected/42148fd9-447b-43a5-b513-7cc37b19ab16-kube-api-access-ftpj8\") pod \"community-operators-tsxnv\" (UID: \"42148fd9-447b-43a5-b513-7cc37b19ab16\") " pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.650264 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42148fd9-447b-43a5-b513-7cc37b19ab16-catalog-content\") pod \"community-operators-tsxnv\" (UID: \"42148fd9-447b-43a5-b513-7cc37b19ab16\") " pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:38:13 crc kubenswrapper[4779]: E1128 12:38:13.650663 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:14.150627779 +0000 UTC m=+154.716303173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.650747 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42148fd9-447b-43a5-b513-7cc37b19ab16-catalog-content\") pod \"community-operators-tsxnv\" (UID: \"42148fd9-447b-43a5-b513-7cc37b19ab16\") " pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.651149 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42148fd9-447b-43a5-b513-7cc37b19ab16-utilities\") pod \"community-operators-tsxnv\" (UID: \"42148fd9-447b-43a5-b513-7cc37b19ab16\") " pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.679494 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftpj8\" (UniqueName: \"kubernetes.io/projected/42148fd9-447b-43a5-b513-7cc37b19ab16-kube-api-access-ftpj8\") pod \"community-operators-tsxnv\" (UID: \"42148fd9-447b-43a5-b513-7cc37b19ab16\") " pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.720930 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.754820 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-utilities\") pod \"certified-operators-nv895\" (UID: \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\") " pod="openshift-marketplace/certified-operators-nv895"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.755083 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4zdq\" (UniqueName: \"kubernetes.io/projected/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-kube-api-access-m4zdq\") pod \"certified-operators-nv895\" (UID: \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\") " pod="openshift-marketplace/certified-operators-nv895"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.755145 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-catalog-content\") pod \"certified-operators-nv895\" (UID: \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\") " pod="openshift-marketplace/certified-operators-nv895"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.755198 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.755597 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xbp9s"]
Nov 28 12:38:13 crc kubenswrapper[4779]: E1128 12:38:13.755646 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 12:38:14.255626296 +0000 UTC m=+154.821301650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gfm2w" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:13 crc kubenswrapper[4779]: W1128 12:38:13.776166 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb88224c6_06e6_41c7_bba9_cb04ae3361e0.slice/crio-35bdeb077301f4e6fba5093ef63afed1142d4f690f8cb8a2b0901b5d59e5134f WatchSource:0}: Error finding container 35bdeb077301f4e6fba5093ef63afed1142d4f690f8cb8a2b0901b5d59e5134f: Status 404 returned error can't find the container with id 35bdeb077301f4e6fba5093ef63afed1142d4f690f8cb8a2b0901b5d59e5134f
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.793718 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bgzr4"]
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.796875 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.805003 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgzr4"]
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.861933 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.862624 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-utilities\") pod \"certified-operators-nv895\" (UID: \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\") " pod="openshift-marketplace/certified-operators-nv895"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.862695 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4zdq\" (UniqueName: \"kubernetes.io/projected/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-kube-api-access-m4zdq\") pod \"certified-operators-nv895\" (UID: \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\") " pod="openshift-marketplace/certified-operators-nv895"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.862730 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-catalog-content\") pod \"certified-operators-nv895\" (UID: \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\") " pod="openshift-marketplace/certified-operators-nv895"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.863344 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-catalog-content\") pod \"certified-operators-nv895\" (UID: \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\") " pod="openshift-marketplace/certified-operators-nv895"
Nov 28 12:38:13 crc kubenswrapper[4779]: E1128 12:38:13.863461 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 12:38:14.363434709 +0000 UTC m=+154.929110063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.863699 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-utilities\") pod \"certified-operators-nv895\" (UID: \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\") " pod="openshift-marketplace/certified-operators-nv895"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.898331 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4zdq\" (UniqueName: \"kubernetes.io/projected/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-kube-api-access-m4zdq\") pod \"certified-operators-nv895\" (UID: \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\") " pod="openshift-marketplace/certified-operators-nv895"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.899472 4779 patch_prober.go:28] interesting pod/router-default-5444994796-k5rpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 12:38:13 crc kubenswrapper[4779]: [-]has-synced failed: reason withheld
Nov 28 12:38:13 crc kubenswrapper[4779]: [+]process-running ok
Nov 28 12:38:13 crc kubenswrapper[4779]: healthz check failed
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.899542 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5rpm" podUID="51d49383-db9f-4a63-865c-4387ecf691ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.902313 4779 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-28T12:38:13.535752936Z","Handler":null,"Name":""}
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.913652 4779 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.913723 4779 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.917044 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nv895"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.963752 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-utilities\") pod \"community-operators-bgzr4\" (UID: \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\") " pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.963816 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjm9g\" (UniqueName: \"kubernetes.io/projected/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-kube-api-access-qjm9g\") pod \"community-operators-bgzr4\" (UID: \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\") " pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.963852 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.963882 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-catalog-content\") pod \"community-operators-bgzr4\" (UID: \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\") " pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.966809 4779 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.966831 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:13 crc kubenswrapper[4779]: I1128 12:38:13.989749 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gfm2w\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.036768 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tsxnv"]
Nov 28 12:38:14 crc kubenswrapper[4779]: W1128 12:38:14.050201 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42148fd9_447b_43a5_b513_7cc37b19ab16.slice/crio-b425776c25c35f2d49a5dd81d43b5ad7e236e6e97a658c7556e8fe624297d2e2 WatchSource:0}: Error finding container b425776c25c35f2d49a5dd81d43b5ad7e236e6e97a658c7556e8fe624297d2e2: Status 404 returned error can't find the container with id b425776c25c35f2d49a5dd81d43b5ad7e236e6e97a658c7556e8fe624297d2e2
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.064437 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.064622 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-utilities\") pod \"community-operators-bgzr4\" (UID: \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\") " pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.064667 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjm9g\" (UniqueName: \"kubernetes.io/projected/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-kube-api-access-qjm9g\") pod \"community-operators-bgzr4\" (UID: \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\") " pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.064703 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-catalog-content\") pod \"community-operators-bgzr4\" (UID: \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\") " pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.065021 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-catalog-content\") pod \"community-operators-bgzr4\" (UID: \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\") " pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.065255 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-utilities\") pod \"community-operators-bgzr4\" (UID: \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\") " pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.078050 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.083165 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjm9g\" (UniqueName: \"kubernetes.io/projected/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-kube-api-access-qjm9g\") pod \"community-operators-bgzr4\" (UID: \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\") " pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.117801 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.136300 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nv895"]
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.140485 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:14 crc kubenswrapper[4779]: W1128 12:38:14.147315 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc5ce4e8_6378_44fa_b81f_2e675c5c1ea5.slice/crio-edbdae4a094d13d14b44ca3ca9338d71437764ca81e4b20a345fb7a693ed2fdc WatchSource:0}: Error finding container edbdae4a094d13d14b44ca3ca9338d71437764ca81e4b20a345fb7a693ed2fdc: Status 404 returned error can't find the container with id edbdae4a094d13d14b44ca3ca9338d71437764ca81e4b20a345fb7a693ed2fdc
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.334968 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgzr4"]
Nov 28 12:38:14 crc kubenswrapper[4779]: W1128 12:38:14.343105 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35ea5bf9_8f7b_43d6_ae1c_8cff8176bae8.slice/crio-f3b1d3cb6a09949704f42871efe5b63dccfdb0fb956768e1f557d634f3231ea0 WatchSource:0}: Error finding container f3b1d3cb6a09949704f42871efe5b63dccfdb0fb956768e1f557d634f3231ea0: Status 404 returned error can't find the container with id f3b1d3cb6a09949704f42871efe5b63dccfdb0fb956768e1f557d634f3231ea0
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.375322 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gfm2w"]
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.450168 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgzr4" event={"ID":"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8","Type":"ContainerStarted","Data":"f3b1d3cb6a09949704f42871efe5b63dccfdb0fb956768e1f557d634f3231ea0"}
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.451863 4779 generic.go:334] "Generic (PLEG): container finished" podID="42148fd9-447b-43a5-b513-7cc37b19ab16" containerID="cd5c8eb2a19bfbb3e38099d48133fd79f6d7d19e36201d554a6561e20e7eb446" exitCode=0
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.451961 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsxnv" event={"ID":"42148fd9-447b-43a5-b513-7cc37b19ab16","Type":"ContainerDied","Data":"cd5c8eb2a19bfbb3e38099d48133fd79f6d7d19e36201d554a6561e20e7eb446"}
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.452021 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsxnv" event={"ID":"42148fd9-447b-43a5-b513-7cc37b19ab16","Type":"ContainerStarted","Data":"b425776c25c35f2d49a5dd81d43b5ad7e236e6e97a658c7556e8fe624297d2e2"}
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.453668 4779 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.453712 4779 generic.go:334] "Generic (PLEG): container finished" podID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" containerID="a52a65567c485270a1404e8dee35c267d3e8a02e4f62a4665211b2525d0420a2" exitCode=0
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.453787 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbp9s" event={"ID":"b88224c6-06e6-41c7-bba9-cb04ae3361e0","Type":"ContainerDied","Data":"a52a65567c485270a1404e8dee35c267d3e8a02e4f62a4665211b2525d0420a2"}
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.453828 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbp9s" event={"ID":"b88224c6-06e6-41c7-bba9-cb04ae3361e0","Type":"ContainerStarted","Data":"35bdeb077301f4e6fba5093ef63afed1142d4f690f8cb8a2b0901b5d59e5134f"}
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.456261 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" event={"ID":"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd","Type":"ContainerStarted","Data":"dc712056739a0e5ec826da2cb96b95c4b28ec9e9d0399599c40b35839a9e81fc"}
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.458695 4779 generic.go:334] "Generic (PLEG): container finished" podID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" containerID="bcc55f13f6802257765d03ef2e099ebe54567447a9875ebd9777e59df80db553" exitCode=0
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.458736 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv895" event={"ID":"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5","Type":"ContainerDied","Data":"bcc55f13f6802257765d03ef2e099ebe54567447a9875ebd9777e59df80db553"}
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.459879 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv895" event={"ID":"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5","Type":"ContainerStarted","Data":"edbdae4a094d13d14b44ca3ca9338d71437764ca81e4b20a345fb7a693ed2fdc"}
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.685475 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5"
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.779254 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dm7h\" (UniqueName: \"kubernetes.io/projected/189cc15e-4851-49ad-a757-49451158a3d7-kube-api-access-5dm7h\") pod \"189cc15e-4851-49ad-a757-49451158a3d7\" (UID: \"189cc15e-4851-49ad-a757-49451158a3d7\") "
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.779317 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/189cc15e-4851-49ad-a757-49451158a3d7-secret-volume\") pod \"189cc15e-4851-49ad-a757-49451158a3d7\" (UID: \"189cc15e-4851-49ad-a757-49451158a3d7\") "
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.780295 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/189cc15e-4851-49ad-a757-49451158a3d7-config-volume\") pod \"189cc15e-4851-49ad-a757-49451158a3d7\" (UID: \"189cc15e-4851-49ad-a757-49451158a3d7\") "
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.780608 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/189cc15e-4851-49ad-a757-49451158a3d7-config-volume" (OuterVolumeSpecName: "config-volume") pod "189cc15e-4851-49ad-a757-49451158a3d7" (UID: "189cc15e-4851-49ad-a757-49451158a3d7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.780859 4779 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/189cc15e-4851-49ad-a757-49451158a3d7-config-volume\") on node \"crc\" DevicePath \"\""
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.787016 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189cc15e-4851-49ad-a757-49451158a3d7-kube-api-access-5dm7h" (OuterVolumeSpecName: "kube-api-access-5dm7h") pod "189cc15e-4851-49ad-a757-49451158a3d7" (UID: "189cc15e-4851-49ad-a757-49451158a3d7"). InnerVolumeSpecName "kube-api-access-5dm7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.787185 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189cc15e-4851-49ad-a757-49451158a3d7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "189cc15e-4851-49ad-a757-49451158a3d7" (UID: "189cc15e-4851-49ad-a757-49451158a3d7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.834207 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4jt92"
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.852039 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hp9zp"
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.857231 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hp9zp"
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.881656 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dm7h\" (UniqueName: \"kubernetes.io/projected/189cc15e-4851-49ad-a757-49451158a3d7-kube-api-access-5dm7h\") on node \"crc\" DevicePath \"\""
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.881686 4779 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/189cc15e-4851-49ad-a757-49451158a3d7-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.897830 4779 patch_prober.go:28] interesting pod/router-default-5444994796-k5rpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 12:38:14 crc kubenswrapper[4779]: [-]has-synced failed: reason withheld
Nov 28 12:38:14 crc kubenswrapper[4779]: [+]process-running ok
Nov 28 12:38:14 crc kubenswrapper[4779]: healthz check failed
Nov 28 12:38:14 crc kubenswrapper[4779]: I1128 12:38:14.898114 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5rpm" podUID="51d49383-db9f-4a63-865c-4387ecf691ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.138221 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 28 12:38:15 crc kubenswrapper[4779]: E1128 12:38:15.138435 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189cc15e-4851-49ad-a757-49451158a3d7" containerName="collect-profiles"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.138447 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="189cc15e-4851-49ad-a757-49451158a3d7" containerName="collect-profiles"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.138534 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="189cc15e-4851-49ad-a757-49451158a3d7" containerName="collect-profiles"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.138933 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.141466 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.141613 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.150610 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ctt57"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.150641 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ctt57"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.151326 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.153022 4779 patch_prober.go:28] interesting pod/console-f9d7485db-ctt57 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.153072 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ctt57" podUID="bb401509-3ef4-41bc-93db-fbee2b5454b9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.200198 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-svdf4"]
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.203949 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svdf4"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.207461 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.207980 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svdf4"]
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.289394 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e555d5fd-d9bd-4146-b28a-de1974211be0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e555d5fd-d9bd-4146-b28a-de1974211be0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.290306 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ce570d-f1e1-4168-9b49-3da4f6b31209-utilities\") pod \"redhat-marketplace-svdf4\" (UID: \"83ce570d-f1e1-4168-9b49-3da4f6b31209\") " pod="openshift-marketplace/redhat-marketplace-svdf4"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.290390 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e555d5fd-d9bd-4146-b28a-de1974211be0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e555d5fd-d9bd-4146-b28a-de1974211be0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.291068 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ce570d-f1e1-4168-9b49-3da4f6b31209-catalog-content\") pod \"redhat-marketplace-svdf4\" (UID: \"83ce570d-f1e1-4168-9b49-3da4f6b31209\") " pod="openshift-marketplace/redhat-marketplace-svdf4"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.291148 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmjd4\" (UniqueName: \"kubernetes.io/projected/83ce570d-f1e1-4168-9b49-3da4f6b31209-kube-api-access-wmjd4\") pod \"redhat-marketplace-svdf4\" (UID: \"83ce570d-f1e1-4168-9b49-3da4f6b31209\") " pod="openshift-marketplace/redhat-marketplace-svdf4"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.392561 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ce570d-f1e1-4168-9b49-3da4f6b31209-utilities\") pod \"redhat-marketplace-svdf4\" (UID: \"83ce570d-f1e1-4168-9b49-3da4f6b31209\") " pod="openshift-marketplace/redhat-marketplace-svdf4"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.392685 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e555d5fd-d9bd-4146-b28a-de1974211be0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e555d5fd-d9bd-4146-b28a-de1974211be0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.392717 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ce570d-f1e1-4168-9b49-3da4f6b31209-catalog-content\") pod \"redhat-marketplace-svdf4\" (UID: \"83ce570d-f1e1-4168-9b49-3da4f6b31209\") " pod="openshift-marketplace/redhat-marketplace-svdf4"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.392754 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmjd4\" (UniqueName: \"kubernetes.io/projected/83ce570d-f1e1-4168-9b49-3da4f6b31209-kube-api-access-wmjd4\") pod \"redhat-marketplace-svdf4\" (UID: \"83ce570d-f1e1-4168-9b49-3da4f6b31209\") " pod="openshift-marketplace/redhat-marketplace-svdf4"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.392786 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e555d5fd-d9bd-4146-b28a-de1974211be0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e555d5fd-d9bd-4146-b28a-de1974211be0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.392874 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e555d5fd-d9bd-4146-b28a-de1974211be0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e555d5fd-d9bd-4146-b28a-de1974211be0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.393212 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ce570d-f1e1-4168-9b49-3da4f6b31209-utilities\") pod \"redhat-marketplace-svdf4\" (UID: \"83ce570d-f1e1-4168-9b49-3da4f6b31209\") " pod="openshift-marketplace/redhat-marketplace-svdf4"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.396219 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ce570d-f1e1-4168-9b49-3da4f6b31209-catalog-content\") pod \"redhat-marketplace-svdf4\" (UID: \"83ce570d-f1e1-4168-9b49-3da4f6b31209\") " pod="openshift-marketplace/redhat-marketplace-svdf4"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.410012 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e555d5fd-d9bd-4146-b28a-de1974211be0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e555d5fd-d9bd-4146-b28a-de1974211be0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.410057 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmjd4\" (UniqueName: \"kubernetes.io/projected/83ce570d-f1e1-4168-9b49-3da4f6b31209-kube-api-access-wmjd4\") pod \"redhat-marketplace-svdf4\" (UID: \"83ce570d-f1e1-4168-9b49-3da4f6b31209\") " pod="openshift-marketplace/redhat-marketplace-svdf4"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.454280 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.493805 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" event={"ID":"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd","Type":"ContainerStarted","Data":"b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b"}
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.494074 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.500397 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5" event={"ID":"189cc15e-4851-49ad-a757-49451158a3d7","Type":"ContainerDied","Data":"1ccf019e5e90a49e8db56bb6689d31cb701b13e33c9cb56f50ee68184ba158fb"}
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.500426 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ccf019e5e90a49e8db56bb6689d31cb701b13e33c9cb56f50ee68184ba158fb"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.500424 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.503892 4779 generic.go:334] "Generic (PLEG): container finished" podID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" containerID="134d672ca608816dfa5b712006dbf0b4cf9c6d7a266646cbcc36de2b81c50ecf" exitCode=0
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.504478 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgzr4" event={"ID":"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8","Type":"ContainerDied","Data":"134d672ca608816dfa5b712006dbf0b4cf9c6d7a266646cbcc36de2b81c50ecf"}
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.516969 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" podStartSLOduration=137.516947439 podStartE2EDuration="2m17.516947439s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:15.512310174 +0000 UTC m=+156.077985528" watchObservedRunningTime="2025-11-28 12:38:15.516947439 +0000 UTC m=+156.082622793"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.520833 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svdf4"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.583455 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x7l6c"]
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.585138 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x7l6c"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.648702 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7l6c"]
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.711003 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq9s5\" (UniqueName: \"kubernetes.io/projected/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-kube-api-access-dq9s5\") pod \"redhat-marketplace-x7l6c\" (UID: \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\") " pod="openshift-marketplace/redhat-marketplace-x7l6c"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.711057 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-utilities\") pod \"redhat-marketplace-x7l6c\" (UID: \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\") " pod="openshift-marketplace/redhat-marketplace-x7l6c"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.711121 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-catalog-content\") pod \"redhat-marketplace-x7l6c\" (UID: \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\") " pod="openshift-marketplace/redhat-marketplace-x7l6c"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.753397 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.814549 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq9s5\" (UniqueName: \"kubernetes.io/projected/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-kube-api-access-dq9s5\") pod \"redhat-marketplace-x7l6c\" (UID: \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\") " pod="openshift-marketplace/redhat-marketplace-x7l6c"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.814617 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-utilities\") pod \"redhat-marketplace-x7l6c\" (UID: \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\") " pod="openshift-marketplace/redhat-marketplace-x7l6c"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.815632 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-catalog-content\") pod \"redhat-marketplace-x7l6c\" (UID: \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\") " pod="openshift-marketplace/redhat-marketplace-x7l6c"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.816227 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-catalog-content\") pod \"redhat-marketplace-x7l6c\" (UID: \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\") " pod="openshift-marketplace/redhat-marketplace-x7l6c"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.816823 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-utilities\") pod \"redhat-marketplace-x7l6c\" (UID: \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\") " pod="openshift-marketplace/redhat-marketplace-x7l6c"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.833652 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.853082 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq9s5\" (UniqueName: \"kubernetes.io/projected/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-kube-api-access-dq9s5\") pod \"redhat-marketplace-x7l6c\" (UID: \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\") " pod="openshift-marketplace/redhat-marketplace-x7l6c"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.896952 4779 patch_prober.go:28] interesting pod/router-default-5444994796-k5rpm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 12:38:15 crc kubenswrapper[4779]: [-]has-synced failed: reason withheld
Nov 28 12:38:15 crc kubenswrapper[4779]: [+]process-running ok
Nov 28 12:38:15 crc kubenswrapper[4779]: healthz check failed
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.897012 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5rpm" podUID="51d49383-db9f-4a63-865c-4387ecf691ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 12:38:15 crc kubenswrapper[4779]: I1128 12:38:15.953711 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x7l6c"
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.120926 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svdf4"]
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.256990 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7l6c"]
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.284653 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.284711 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.519151 4779 generic.go:334] "Generic (PLEG): container finished" podID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" containerID="3800c654e31aa4e2919ddd6ed96161a5d0a31455f445ce4a8eef6ea63b66c5a0" exitCode=0
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.519523 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7l6c" event={"ID":"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df","Type":"ContainerDied","Data":"3800c654e31aa4e2919ddd6ed96161a5d0a31455f445ce4a8eef6ea63b66c5a0"}
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.519554 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7l6c" event={"ID":"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df","Type":"ContainerStarted","Data":"28ff0678cfaebefa3e74543ab201a5af0713748f85d9ce29a6ce8a3dcaa05292"}
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.528707 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e555d5fd-d9bd-4146-b28a-de1974211be0","Type":"ContainerStarted","Data":"28e1595c95e29d9610a6e7b1eaa569214e8173d5219eaa5a1b63914a3475b82c"}
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.528753 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e555d5fd-d9bd-4146-b28a-de1974211be0","Type":"ContainerStarted","Data":"50d26dd34362508cfdf6e821404d9b616e8b09e270b89bda0d8fd0061591f1c8"}
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.534482 4779 generic.go:334] "Generic (PLEG): container finished" podID="83ce570d-f1e1-4168-9b49-3da4f6b31209" containerID="852465927e704c9a28d444fa960c537018e72e62ad6db0c39a048f6c67947ae7" exitCode=0
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.538400 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svdf4" event={"ID":"83ce570d-f1e1-4168-9b49-3da4f6b31209","Type":"ContainerDied","Data":"852465927e704c9a28d444fa960c537018e72e62ad6db0c39a048f6c67947ae7"}
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.538438 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svdf4" event={"ID":"83ce570d-f1e1-4168-9b49-3da4f6b31209","Type":"ContainerStarted","Data":"56e32976c361ea8dbe4ddde109d17f6db2f1a86224cd96afddffc12c126a310d"}
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.593161 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ntps2"]
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.641892 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.6418575450000001 podStartE2EDuration="1.641857545s" podCreationTimestamp="2025-11-28 12:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:16.594246813 +0000 UTC m=+157.159922167" watchObservedRunningTime="2025-11-28 12:38:16.641857545 +0000 UTC m=+157.207532899"
Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.647853 4779 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.654874 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.655691 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ntps2"] Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.742753 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54c6e50-f765-4a8c-b147-237821a03d11-utilities\") pod \"redhat-operators-ntps2\" (UID: \"b54c6e50-f765-4a8c-b147-237821a03d11\") " pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.742833 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54c6e50-f765-4a8c-b147-237821a03d11-catalog-content\") pod \"redhat-operators-ntps2\" (UID: \"b54c6e50-f765-4a8c-b147-237821a03d11\") " pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.742854 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdbm2\" (UniqueName: \"kubernetes.io/projected/b54c6e50-f765-4a8c-b147-237821a03d11-kube-api-access-zdbm2\") pod \"redhat-operators-ntps2\" (UID: \"b54c6e50-f765-4a8c-b147-237821a03d11\") " pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.844816 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54c6e50-f765-4a8c-b147-237821a03d11-utilities\") pod \"redhat-operators-ntps2\" (UID: \"b54c6e50-f765-4a8c-b147-237821a03d11\") " pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.844027 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54c6e50-f765-4a8c-b147-237821a03d11-utilities\") pod \"redhat-operators-ntps2\" (UID: \"b54c6e50-f765-4a8c-b147-237821a03d11\") " pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.844970 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54c6e50-f765-4a8c-b147-237821a03d11-catalog-content\") pod \"redhat-operators-ntps2\" (UID: \"b54c6e50-f765-4a8c-b147-237821a03d11\") " pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.844996 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdbm2\" (UniqueName: \"kubernetes.io/projected/b54c6e50-f765-4a8c-b147-237821a03d11-kube-api-access-zdbm2\") pod \"redhat-operators-ntps2\" (UID: \"b54c6e50-f765-4a8c-b147-237821a03d11\") " pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.845246 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54c6e50-f765-4a8c-b147-237821a03d11-catalog-content\") pod \"redhat-operators-ntps2\" (UID: \"b54c6e50-f765-4a8c-b147-237821a03d11\") " 
pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.866881 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdbm2\" (UniqueName: \"kubernetes.io/projected/b54c6e50-f765-4a8c-b147-237821a03d11-kube-api-access-zdbm2\") pod \"redhat-operators-ntps2\" (UID: \"b54c6e50-f765-4a8c-b147-237821a03d11\") " pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.893502 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.897828 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.963762 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.964707 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.972506 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.972654 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.976554 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 12:38:16 crc kubenswrapper[4779]: I1128 12:38:16.982974 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.005620 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gtp6j"] Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.006623 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.042624 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gtp6j"] Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.047485 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/307e7c5f-582b-497f-ba2e-dba16e9f30be-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"307e7c5f-582b-497f-ba2e-dba16e9f30be\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.047525 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/307e7c5f-582b-497f-ba2e-dba16e9f30be-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"307e7c5f-582b-497f-ba2e-dba16e9f30be\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.153406 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585d10b8-61e3-4059-9fe7-81895b9ca67d-catalog-content\") pod \"redhat-operators-gtp6j\" (UID: \"585d10b8-61e3-4059-9fe7-81895b9ca67d\") " pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.153508 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585d10b8-61e3-4059-9fe7-81895b9ca67d-utilities\") pod \"redhat-operators-gtp6j\" (UID: \"585d10b8-61e3-4059-9fe7-81895b9ca67d\") " pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.153542 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/307e7c5f-582b-497f-ba2e-dba16e9f30be-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"307e7c5f-582b-497f-ba2e-dba16e9f30be\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.153568 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/307e7c5f-582b-497f-ba2e-dba16e9f30be-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"307e7c5f-582b-497f-ba2e-dba16e9f30be\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.153584 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl9t4\" (UniqueName: \"kubernetes.io/projected/585d10b8-61e3-4059-9fe7-81895b9ca67d-kube-api-access-sl9t4\") pod \"redhat-operators-gtp6j\" (UID: \"585d10b8-61e3-4059-9fe7-81895b9ca67d\") " pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.153897 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/307e7c5f-582b-497f-ba2e-dba16e9f30be-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"307e7c5f-582b-497f-ba2e-dba16e9f30be\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.174489 4779 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/307e7c5f-582b-497f-ba2e-dba16e9f30be-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"307e7c5f-582b-497f-ba2e-dba16e9f30be\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.254726 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585d10b8-61e3-4059-9fe7-81895b9ca67d-utilities\") pod \"redhat-operators-gtp6j\" (UID: \"585d10b8-61e3-4059-9fe7-81895b9ca67d\") " pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.255107 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl9t4\" (UniqueName: \"kubernetes.io/projected/585d10b8-61e3-4059-9fe7-81895b9ca67d-kube-api-access-sl9t4\") pod \"redhat-operators-gtp6j\" (UID: \"585d10b8-61e3-4059-9fe7-81895b9ca67d\") " pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.255130 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585d10b8-61e3-4059-9fe7-81895b9ca67d-catalog-content\") pod \"redhat-operators-gtp6j\" (UID: \"585d10b8-61e3-4059-9fe7-81895b9ca67d\") " pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.255490 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585d10b8-61e3-4059-9fe7-81895b9ca67d-utilities\") pod \"redhat-operators-gtp6j\" (UID: \"585d10b8-61e3-4059-9fe7-81895b9ca67d\") " pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.255616 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585d10b8-61e3-4059-9fe7-81895b9ca67d-catalog-content\") pod \"redhat-operators-gtp6j\" (UID: \"585d10b8-61e3-4059-9fe7-81895b9ca67d\") " pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.277932 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ntps2"] Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.279497 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl9t4\" (UniqueName: \"kubernetes.io/projected/585d10b8-61e3-4059-9fe7-81895b9ca67d-kube-api-access-sl9t4\") pod \"redhat-operators-gtp6j\" (UID: \"585d10b8-61e3-4059-9fe7-81895b9ca67d\") " pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.297606 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.365394 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.576402 4779 generic.go:334] "Generic (PLEG): container finished" podID="e555d5fd-d9bd-4146-b28a-de1974211be0" containerID="28e1595c95e29d9610a6e7b1eaa569214e8173d5219eaa5a1b63914a3475b82c" exitCode=0 Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.576538 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e555d5fd-d9bd-4146-b28a-de1974211be0","Type":"ContainerDied","Data":"28e1595c95e29d9610a6e7b1eaa569214e8173d5219eaa5a1b63914a3475b82c"} Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.577922 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntps2" event={"ID":"b54c6e50-f765-4a8c-b147-237821a03d11","Type":"ContainerStarted","Data":"6a89ead843be1c986ecce9648d0e52009571f3229cf4fe631b31137c992d3c35"} Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.582594 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-k5rpm" Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.723503 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gtp6j"] Nov 28 12:38:17 crc kubenswrapper[4779]: W1128 12:38:17.761719 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod585d10b8_61e3_4059_9fe7_81895b9ca67d.slice/crio-b7ce228d68520745b1e44484b515df8c1e16ae366c443353d830cfb60b16dfdc WatchSource:0}: Error finding container b7ce228d68520745b1e44484b515df8c1e16ae366c443353d830cfb60b16dfdc: Status 404 returned error can't find the container with id b7ce228d68520745b1e44484b515df8c1e16ae366c443353d830cfb60b16dfdc Nov 28 12:38:17 crc kubenswrapper[4779]: I1128 12:38:17.911898 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 12:38:18 crc kubenswrapper[4779]: I1128 12:38:18.596223 4779 generic.go:334] "Generic (PLEG): container finished" podID="b54c6e50-f765-4a8c-b147-237821a03d11" containerID="8e28d07389fb63d0ab4f940e1455f97d96512528314398a81dc7d88c94798cc2" exitCode=0 Nov 28 12:38:18 crc kubenswrapper[4779]: I1128 12:38:18.596314 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntps2" event={"ID":"b54c6e50-f765-4a8c-b147-237821a03d11","Type":"ContainerDied","Data":"8e28d07389fb63d0ab4f940e1455f97d96512528314398a81dc7d88c94798cc2"} Nov 28 12:38:18 crc kubenswrapper[4779]: I1128 12:38:18.599187 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtp6j" event={"ID":"585d10b8-61e3-4059-9fe7-81895b9ca67d","Type":"ContainerStarted","Data":"b7ce228d68520745b1e44484b515df8c1e16ae366c443353d830cfb60b16dfdc"} Nov 28 12:38:18 crc kubenswrapper[4779]: I1128 12:38:18.600389 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"307e7c5f-582b-497f-ba2e-dba16e9f30be","Type":"ContainerStarted","Data":"c2db38e40d7e7dc3cb2835ed54236938848e02d57b41e367422615757b045992"} Nov 28 12:38:18 crc kubenswrapper[4779]: I1128 12:38:18.960889 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.105480 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e555d5fd-d9bd-4146-b28a-de1974211be0-kubelet-dir\") pod \"e555d5fd-d9bd-4146-b28a-de1974211be0\" (UID: \"e555d5fd-d9bd-4146-b28a-de1974211be0\") " Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.105587 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e555d5fd-d9bd-4146-b28a-de1974211be0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e555d5fd-d9bd-4146-b28a-de1974211be0" (UID: "e555d5fd-d9bd-4146-b28a-de1974211be0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.105819 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e555d5fd-d9bd-4146-b28a-de1974211be0-kube-api-access\") pod \"e555d5fd-d9bd-4146-b28a-de1974211be0\" (UID: \"e555d5fd-d9bd-4146-b28a-de1974211be0\") " Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.107632 4779 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e555d5fd-d9bd-4146-b28a-de1974211be0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.128592 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e555d5fd-d9bd-4146-b28a-de1974211be0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e555d5fd-d9bd-4146-b28a-de1974211be0" (UID: "e555d5fd-d9bd-4146-b28a-de1974211be0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.209998 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e555d5fd-d9bd-4146-b28a-de1974211be0-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.613612 4779 generic.go:334] "Generic (PLEG): container finished" podID="585d10b8-61e3-4059-9fe7-81895b9ca67d" containerID="d92b0c82f54a538889bdad928254c0dce04b66a0d46234809c2a2abbe4f6ce62" exitCode=0 Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.613693 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtp6j" event={"ID":"585d10b8-61e3-4059-9fe7-81895b9ca67d","Type":"ContainerDied","Data":"d92b0c82f54a538889bdad928254c0dce04b66a0d46234809c2a2abbe4f6ce62"} Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.620860 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.620853 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e555d5fd-d9bd-4146-b28a-de1974211be0","Type":"ContainerDied","Data":"50d26dd34362508cfdf6e821404d9b616e8b09e270b89bda0d8fd0061591f1c8"} Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.621032 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50d26dd34362508cfdf6e821404d9b616e8b09e270b89bda0d8fd0061591f1c8" Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.626142 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"307e7c5f-582b-497f-ba2e-dba16e9f30be","Type":"ContainerStarted","Data":"154209fde8f2be4280a6c6850de9d45bb3ebfca04035a9ce226f7d66ba50fcfd"} Nov 28 12:38:19 crc kubenswrapper[4779]: I1128 12:38:19.648741 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.648718224 podStartE2EDuration="3.648718224s" podCreationTimestamp="2025-11-28 12:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:19.645495227 +0000 UTC m=+160.211170581" watchObservedRunningTime="2025-11-28 12:38:19.648718224 +0000 UTC m=+160.214393578" Nov 28 12:38:20 crc kubenswrapper[4779]: I1128 12:38:20.634890 4779 generic.go:334] "Generic (PLEG): container finished" podID="307e7c5f-582b-497f-ba2e-dba16e9f30be" containerID="154209fde8f2be4280a6c6850de9d45bb3ebfca04035a9ce226f7d66ba50fcfd" exitCode=0 Nov 28 12:38:20 crc kubenswrapper[4779]: I1128 12:38:20.634969 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"307e7c5f-582b-497f-ba2e-dba16e9f30be","Type":"ContainerDied","Data":"154209fde8f2be4280a6c6850de9d45bb3ebfca04035a9ce226f7d66ba50fcfd"} Nov 28 12:38:20 crc kubenswrapper[4779]: I1128 12:38:20.734228 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:38:20 crc kubenswrapper[4779]: I1128 12:38:20.739445 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d9943eb-ea06-476d-8736-0a45e588d9f4-metrics-certs\") pod \"network-metrics-daemon-c2psj\" (UID: \"2d9943eb-ea06-476d-8736-0a45e588d9f4\") " pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:38:20 crc kubenswrapper[4779]: I1128 12:38:20.950466 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c2psj" Nov 28 12:38:21 crc kubenswrapper[4779]: I1128 12:38:21.530019 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c2psj"] Nov 28 12:38:21 crc kubenswrapper[4779]: W1128 12:38:21.554317 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d9943eb_ea06_476d_8736_0a45e588d9f4.slice/crio-30534c4ad04092d657ace8644d2dde3eb69dbdbf18f7da98f5988a8e7af3b4ab WatchSource:0}: Error finding container 30534c4ad04092d657ace8644d2dde3eb69dbdbf18f7da98f5988a8e7af3b4ab: Status 404 returned error can't find the container with id 30534c4ad04092d657ace8644d2dde3eb69dbdbf18f7da98f5988a8e7af3b4ab Nov 28 12:38:21 crc kubenswrapper[4779]: I1128 12:38:21.642395 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c2psj" event={"ID":"2d9943eb-ea06-476d-8736-0a45e588d9f4","Type":"ContainerStarted","Data":"30534c4ad04092d657ace8644d2dde3eb69dbdbf18f7da98f5988a8e7af3b4ab"} Nov 28 12:38:21 crc kubenswrapper[4779]: I1128 12:38:21.911211 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 12:38:22 crc kubenswrapper[4779]: I1128 12:38:22.056759 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/307e7c5f-582b-497f-ba2e-dba16e9f30be-kubelet-dir\") pod \"307e7c5f-582b-497f-ba2e-dba16e9f30be\" (UID: \"307e7c5f-582b-497f-ba2e-dba16e9f30be\") " Nov 28 12:38:22 crc kubenswrapper[4779]: I1128 12:38:22.056824 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/307e7c5f-582b-497f-ba2e-dba16e9f30be-kube-api-access\") pod \"307e7c5f-582b-497f-ba2e-dba16e9f30be\" (UID: \"307e7c5f-582b-497f-ba2e-dba16e9f30be\") " Nov 28 12:38:22 crc kubenswrapper[4779]: I1128 12:38:22.056865 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/307e7c5f-582b-497f-ba2e-dba16e9f30be-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "307e7c5f-582b-497f-ba2e-dba16e9f30be" (UID: "307e7c5f-582b-497f-ba2e-dba16e9f30be"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:38:22 crc kubenswrapper[4779]: I1128 12:38:22.057082 4779 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/307e7c5f-582b-497f-ba2e-dba16e9f30be-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 12:38:22 crc kubenswrapper[4779]: I1128 12:38:22.064327 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/307e7c5f-582b-497f-ba2e-dba16e9f30be-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "307e7c5f-582b-497f-ba2e-dba16e9f30be" (UID: "307e7c5f-582b-497f-ba2e-dba16e9f30be"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:38:22 crc kubenswrapper[4779]: I1128 12:38:22.159303 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/307e7c5f-582b-497f-ba2e-dba16e9f30be-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 12:38:22 crc kubenswrapper[4779]: I1128 12:38:22.187796 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mrrkd" Nov 28 12:38:22 crc kubenswrapper[4779]: I1128 12:38:22.649999 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"307e7c5f-582b-497f-ba2e-dba16e9f30be","Type":"ContainerDied","Data":"c2db38e40d7e7dc3cb2835ed54236938848e02d57b41e367422615757b045992"} Nov 28 12:38:22 crc kubenswrapper[4779]: I1128 12:38:22.650040 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2db38e40d7e7dc3cb2835ed54236938848e02d57b41e367422615757b045992" Nov 28 12:38:22 crc kubenswrapper[4779]: I1128 12:38:22.650175 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 12:38:24 crc kubenswrapper[4779]: I1128 12:38:24.660452 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c2psj" event={"ID":"2d9943eb-ea06-476d-8736-0a45e588d9f4","Type":"ContainerStarted","Data":"921b0a3a559987a071b324022ce46e9080e2b748a9b8d11cc46751e70771a383"} Nov 28 12:38:25 crc kubenswrapper[4779]: I1128 12:38:25.149835 4779 patch_prober.go:28] interesting pod/console-f9d7485db-ctt57 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Nov 28 12:38:25 crc kubenswrapper[4779]: I1128 12:38:25.149892 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ctt57" podUID="bb401509-3ef4-41bc-93db-fbee2b5454b9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Nov 28 12:38:34 crc kubenswrapper[4779]: I1128 12:38:34.148188 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:38:35 crc kubenswrapper[4779]: I1128 12:38:35.159084 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:35 crc kubenswrapper[4779]: I1128 12:38:35.169214 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ctt57" Nov 28 12:38:40 crc kubenswrapper[4779]: E1128 12:38:40.843613 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 12:38:40 crc kubenswrapper[4779]: E1128 12:38:40.844815 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4zdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-nv895_openshift-marketplace(dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 12:38:40 crc kubenswrapper[4779]: E1128 12:38:40.846046 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-nv895" podUID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" Nov 28 12:38:40 crc kubenswrapper[4779]: E1128 12:38:40.930886 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 12:38:40 crc kubenswrapper[4779]: E1128 12:38:40.931382 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dq9s5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-x7l6c_openshift-marketplace(b3dbcb58-e82e-47c8-b02d-b7cdca5b52df): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 12:38:40 crc kubenswrapper[4779]: E1128 12:38:40.932776 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-x7l6c" podUID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" Nov 28 12:38:46 crc kubenswrapper[4779]: I1128 12:38:46.284918 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:38:46 crc kubenswrapper[4779]: I1128 12:38:46.285454 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:38:46 crc kubenswrapper[4779]: I1128 12:38:46.376816 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 12:38:47 crc kubenswrapper[4779]: I1128 12:38:47.091437 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxbwl" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.153904 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 12:38:50 crc kubenswrapper[4779]: E1128 12:38:50.154377 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e555d5fd-d9bd-4146-b28a-de1974211be0" containerName="pruner" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 
12:38:50.154404 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e555d5fd-d9bd-4146-b28a-de1974211be0" containerName="pruner" Nov 28 12:38:50 crc kubenswrapper[4779]: E1128 12:38:50.154440 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307e7c5f-582b-497f-ba2e-dba16e9f30be" containerName="pruner" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.154453 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="307e7c5f-582b-497f-ba2e-dba16e9f30be" containerName="pruner" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.154675 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="e555d5fd-d9bd-4146-b28a-de1974211be0" containerName="pruner" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.154710 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="307e7c5f-582b-497f-ba2e-dba16e9f30be" containerName="pruner" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.155501 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.159011 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.159145 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.174553 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.203784 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a71e93bc-498c-4b1b-bf39-2990a508a0fa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a71e93bc-498c-4b1b-bf39-2990a508a0fa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.203983 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a71e93bc-498c-4b1b-bf39-2990a508a0fa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a71e93bc-498c-4b1b-bf39-2990a508a0fa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.305352 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a71e93bc-498c-4b1b-bf39-2990a508a0fa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a71e93bc-498c-4b1b-bf39-2990a508a0fa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.305458 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a71e93bc-498c-4b1b-bf39-2990a508a0fa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a71e93bc-498c-4b1b-bf39-2990a508a0fa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.305624 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a71e93bc-498c-4b1b-bf39-2990a508a0fa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a71e93bc-498c-4b1b-bf39-2990a508a0fa\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.342427 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a71e93bc-498c-4b1b-bf39-2990a508a0fa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a71e93bc-498c-4b1b-bf39-2990a508a0fa\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 12:38:50 crc kubenswrapper[4779]: E1128 12:38:50.474503 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-nv895" podUID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" Nov 28 12:38:50 crc kubenswrapper[4779]: E1128 12:38:50.474592 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x7l6c" podUID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" Nov 28 12:38:50 crc kubenswrapper[4779]: I1128 12:38:50.484067 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 12:38:50 crc kubenswrapper[4779]: E1128 12:38:50.565805 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 12:38:50 crc kubenswrapper[4779]: E1128 12:38:50.566007 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmjd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-svdf4_openshift-marketplace(83ce570d-f1e1-4168-9b49-3da4f6b31209): ErrImagePull: rpc error: code = Canceled desc = copying system image 
from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 12:38:50 crc kubenswrapper[4779]: E1128 12:38:50.567610 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-svdf4" podUID="83ce570d-f1e1-4168-9b49-3da4f6b31209" Nov 28 12:38:51 crc kubenswrapper[4779]: E1128 12:38:51.053335 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 12:38:51 crc kubenswrapper[4779]: E1128 12:38:51.053483 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjm9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bgzr4_openshift-marketplace(35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 12:38:51 crc kubenswrapper[4779]: E1128 12:38:51.054748 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bgzr4" podUID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" Nov 28 12:38:52 crc kubenswrapper[4779]: E1128 12:38:52.365860 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 12:38:52 crc kubenswrapper[4779]: E1128 12:38:52.366382 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftpj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tsxnv_openshift-marketplace(42148fd9-447b-43a5-b513-7cc37b19ab16): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 12:38:52 crc kubenswrapper[4779]: E1128 12:38:52.367467 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tsxnv" podUID="42148fd9-447b-43a5-b513-7cc37b19ab16" Nov 28 12:38:52 crc kubenswrapper[4779]: E1128 12:38:52.427249 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 12:38:52 crc kubenswrapper[4779]: E1128 12:38:52.427411 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqxlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xbp9s_openshift-marketplace(b88224c6-06e6-41c7-bba9-cb04ae3361e0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 12:38:52 crc kubenswrapper[4779]: E1128 12:38:52.428494 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xbp9s" podUID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" Nov 28 12:38:54 crc kubenswrapper[4779]: E1128 12:38:54.627310 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bgzr4" podUID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" Nov 28 12:38:54 crc kubenswrapper[4779]: E1128 12:38:54.627393 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-svdf4" podUID="83ce570d-f1e1-4168-9b49-3da4f6b31209" Nov 28 12:38:54 crc kubenswrapper[4779]: E1128 12:38:54.628588 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xbp9s" podUID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" Nov 28 12:38:54 crc kubenswrapper[4779]: E1128 12:38:54.628585 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tsxnv" podUID="42148fd9-447b-43a5-b513-7cc37b19ab16" Nov 28 
Nov 28 12:38:54 crc kubenswrapper[4779]: E1128 12:38:54.701838 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl9t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gtp6j_openshift-marketplace(585d10b8-61e3-4059-9fe7-81895b9ca67d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 12:38:54 crc kubenswrapper[4779]: E1128 12:38:54.703511 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gtp6j" podUID="585d10b8-61e3-4059-9fe7-81895b9ca67d" Nov 28 12:38:54 crc kubenswrapper[4779]: E1128 12:38:54.882426 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gtp6j" podUID="585d10b8-61e3-4059-9fe7-81895b9ca67d" Nov 28 12:38:54 crc kubenswrapper[4779]: I1128 12:38:54.957562 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.139232 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.140547 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.154971 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.175960 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/054c0628-de67-429b-bb65-ac369cde4509-kubelet-dir\") pod \"installer-9-crc\" (UID: \"054c0628-de67-429b-bb65-ac369cde4509\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.176255 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/054c0628-de67-429b-bb65-ac369cde4509-var-lock\") pod \"installer-9-crc\" (UID: \"054c0628-de67-429b-bb65-ac369cde4509\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.176363 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/054c0628-de67-429b-bb65-ac369cde4509-kube-api-access\") pod \"installer-9-crc\" (UID: \"054c0628-de67-429b-bb65-ac369cde4509\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.277158 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/054c0628-de67-429b-bb65-ac369cde4509-kube-api-access\") pod \"installer-9-crc\" (UID: \"054c0628-de67-429b-bb65-ac369cde4509\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.277245 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/054c0628-de67-429b-bb65-ac369cde4509-kubelet-dir\") pod \"installer-9-crc\" (UID: \"054c0628-de67-429b-bb65-ac369cde4509\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.277354 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/054c0628-de67-429b-bb65-ac369cde4509-var-lock\") pod \"installer-9-crc\" (UID: \"054c0628-de67-429b-bb65-ac369cde4509\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.277474 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/054c0628-de67-429b-bb65-ac369cde4509-var-lock\") pod \"installer-9-crc\" (UID: \"054c0628-de67-429b-bb65-ac369cde4509\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.277935 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/054c0628-de67-429b-bb65-ac369cde4509-kubelet-dir\") pod \"installer-9-crc\" (UID: \"054c0628-de67-429b-bb65-ac369cde4509\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.301708 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/054c0628-de67-429b-bb65-ac369cde4509-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"054c0628-de67-429b-bb65-ac369cde4509\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.475885 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.705937 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 12:38:55 crc kubenswrapper[4779]: W1128 12:38:55.708104 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod054c0628_de67_429b_bb65_ac369cde4509.slice/crio-fb573289255c067daab676b8315871782002b5df1a26f3e4ac27edf3b1d77090 WatchSource:0}: Error finding container fb573289255c067daab676b8315871782002b5df1a26f3e4ac27edf3b1d77090: Status 404 returned error can't find the container with id fb573289255c067daab676b8315871782002b5df1a26f3e4ac27edf3b1d77090 Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.884770 4779 generic.go:334] "Generic (PLEG): container finished" podID="b54c6e50-f765-4a8c-b147-237821a03d11" containerID="d49785b3671b039bcf84d7819ebbcb9807bb00cea962024f0f1fcb2db492cca1" exitCode=0 Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.884858 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntps2" event={"ID":"b54c6e50-f765-4a8c-b147-237821a03d11","Type":"ContainerDied","Data":"d49785b3671b039bcf84d7819ebbcb9807bb00cea962024f0f1fcb2db492cca1"} Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.887417 4779 generic.go:334] "Generic (PLEG): container finished" podID="a71e93bc-498c-4b1b-bf39-2990a508a0fa" containerID="caf784610b89f4bb03878953e81f045afa507ae86180f0ef095b756bd78bcdad" exitCode=0 Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.887567 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a71e93bc-498c-4b1b-bf39-2990a508a0fa","Type":"ContainerDied","Data":"caf784610b89f4bb03878953e81f045afa507ae86180f0ef095b756bd78bcdad"} Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.887596 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a71e93bc-498c-4b1b-bf39-2990a508a0fa","Type":"ContainerStarted","Data":"0630274c5dfb8b590730bdeb99c19aecff5910bafbafdf8d1adc57266b2c73fd"} Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.888989 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"054c0628-de67-429b-bb65-ac369cde4509","Type":"ContainerStarted","Data":"fb573289255c067daab676b8315871782002b5df1a26f3e4ac27edf3b1d77090"} Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.891587 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c2psj" event={"ID":"2d9943eb-ea06-476d-8736-0a45e588d9f4","Type":"ContainerStarted","Data":"6ac4af9d24487ad8d7d93f31a2860e07437bc317b6f7ade7d3d63c4353cc162b"} Nov 28 12:38:55 crc kubenswrapper[4779]: I1128 12:38:55.934026 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-c2psj" podStartSLOduration=177.934005352 podStartE2EDuration="2m57.934005352s" podCreationTimestamp="2025-11-28 12:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:55.93168786 +0000 UTC m=+196.497363234" 
watchObservedRunningTime="2025-11-28 12:38:55.934005352 +0000 UTC m=+196.499680706" Nov 28 12:38:56 crc kubenswrapper[4779]: I1128 12:38:56.900770 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntps2" event={"ID":"b54c6e50-f765-4a8c-b147-237821a03d11","Type":"ContainerStarted","Data":"56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde"} Nov 28 12:38:56 crc kubenswrapper[4779]: I1128 12:38:56.902451 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"054c0628-de67-429b-bb65-ac369cde4509","Type":"ContainerStarted","Data":"31c11ccbc7134d09aa016f32b08a4dc02e273a4130e32e5bd0718c01ad274507"} Nov 28 12:38:56 crc kubenswrapper[4779]: I1128 12:38:56.924659 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ntps2" podStartSLOduration=3.921404421 podStartE2EDuration="40.924640334s" podCreationTimestamp="2025-11-28 12:38:16 +0000 UTC" firstStartedPulling="2025-11-28 12:38:19.628042337 +0000 UTC m=+160.193717691" lastFinishedPulling="2025-11-28 12:38:56.63127825 +0000 UTC m=+197.196953604" observedRunningTime="2025-11-28 12:38:56.920141252 +0000 UTC m=+197.485816626" watchObservedRunningTime="2025-11-28 12:38:56.924640334 +0000 UTC m=+197.490315688" Nov 28 12:38:56 crc kubenswrapper[4779]: I1128 12:38:56.935332 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.935307613 podStartE2EDuration="1.935307613s" podCreationTimestamp="2025-11-28 12:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:38:56.934186862 +0000 UTC m=+197.499862236" watchObservedRunningTime="2025-11-28 12:38:56.935307613 +0000 UTC m=+197.500982977" Nov 28 12:38:56 crc kubenswrapper[4779]: I1128 12:38:56.984350 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:56 crc kubenswrapper[4779]: I1128 12:38:56.984405 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:38:57 crc kubenswrapper[4779]: I1128 12:38:57.132503 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 12:38:57 crc kubenswrapper[4779]: I1128 12:38:57.202279 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a71e93bc-498c-4b1b-bf39-2990a508a0fa-kubelet-dir\") pod \"a71e93bc-498c-4b1b-bf39-2990a508a0fa\" (UID: \"a71e93bc-498c-4b1b-bf39-2990a508a0fa\") " Nov 28 12:38:57 crc kubenswrapper[4779]: I1128 12:38:57.202399 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a71e93bc-498c-4b1b-bf39-2990a508a0fa-kube-api-access\") pod \"a71e93bc-498c-4b1b-bf39-2990a508a0fa\" (UID: \"a71e93bc-498c-4b1b-bf39-2990a508a0fa\") " Nov 28 12:38:57 crc kubenswrapper[4779]: I1128 12:38:57.202683 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a71e93bc-498c-4b1b-bf39-2990a508a0fa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a71e93bc-498c-4b1b-bf39-2990a508a0fa" (UID: "a71e93bc-498c-4b1b-bf39-2990a508a0fa"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:38:57 crc kubenswrapper[4779]: I1128 12:38:57.202859 4779 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a71e93bc-498c-4b1b-bf39-2990a508a0fa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 12:38:57 crc kubenswrapper[4779]: I1128 12:38:57.207596 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a71e93bc-498c-4b1b-bf39-2990a508a0fa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a71e93bc-498c-4b1b-bf39-2990a508a0fa" (UID: "a71e93bc-498c-4b1b-bf39-2990a508a0fa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:38:57 crc kubenswrapper[4779]: I1128 12:38:57.304316 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a71e93bc-498c-4b1b-bf39-2990a508a0fa-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 12:38:57 crc kubenswrapper[4779]: I1128 12:38:57.923206 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 12:38:57 crc kubenswrapper[4779]: I1128 12:38:57.923350 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a71e93bc-498c-4b1b-bf39-2990a508a0fa","Type":"ContainerDied","Data":"0630274c5dfb8b590730bdeb99c19aecff5910bafbafdf8d1adc57266b2c73fd"} Nov 28 12:38:57 crc kubenswrapper[4779]: I1128 12:38:57.923388 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0630274c5dfb8b590730bdeb99c19aecff5910bafbafdf8d1adc57266b2c73fd" Nov 28 12:38:58 crc kubenswrapper[4779]: I1128 12:38:58.051768 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ntps2" podUID="b54c6e50-f765-4a8c-b147-237821a03d11" containerName="registry-server" probeResult="failure" output=< Nov 28 12:38:58 crc kubenswrapper[4779]: timeout: failed to connect service ":50051" within 1s Nov 28 12:38:58 crc kubenswrapper[4779]: > Nov 28 12:39:03 crc kubenswrapper[4779]: I1128 12:39:03.961071 4779 generic.go:334] "Generic (PLEG): container finished" podID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" containerID="fd64b8d716208e977fe34c5f11a5ea19ea99f5a108db8a154f6812ec97286c38" exitCode=0 Nov 28 12:39:03 crc kubenswrapper[4779]: I1128 12:39:03.961745 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7l6c" event={"ID":"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df","Type":"ContainerDied","Data":"fd64b8d716208e977fe34c5f11a5ea19ea99f5a108db8a154f6812ec97286c38"} Nov 28 12:39:04 crc kubenswrapper[4779]: I1128 12:39:04.968359 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv895" event={"ID":"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5","Type":"ContainerStarted","Data":"27df372dcf909b68e1d9e9140787d605f56ad81a9ffab0b510df8366240cfe5f"} Nov 28 12:39:04 crc kubenswrapper[4779]: I1128 12:39:04.975190 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7l6c" event={"ID":"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df","Type":"ContainerStarted","Data":"8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf"} Nov 28 12:39:05 crc kubenswrapper[4779]: I1128 12:39:05.008832 4779 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x7l6c" podStartSLOduration=2.056097054 podStartE2EDuration="50.008814348s" podCreationTimestamp="2025-11-28 12:38:15 +0000 UTC" firstStartedPulling="2025-11-28 12:38:16.522249095 +0000 UTC m=+157.087924449" lastFinishedPulling="2025-11-28 12:39:04.474966349 +0000 UTC m=+205.040641743" observedRunningTime="2025-11-28 12:39:05.007127642 +0000 UTC m=+205.572803046" watchObservedRunningTime="2025-11-28 12:39:05.008814348 +0000 UTC m=+205.574489702" Nov 28 12:39:05 crc kubenswrapper[4779]: I1128 12:39:05.954458 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x7l6c" Nov 28 12:39:05 crc kubenswrapper[4779]: I1128 12:39:05.955032 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x7l6c" Nov 28 12:39:05 crc kubenswrapper[4779]: I1128 12:39:05.984682 4779 generic.go:334] "Generic (PLEG): container finished" podID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" containerID="27df372dcf909b68e1d9e9140787d605f56ad81a9ffab0b510df8366240cfe5f" exitCode=0 Nov 28 12:39:05 crc kubenswrapper[4779]: I1128 12:39:05.984753 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv895" event={"ID":"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5","Type":"ContainerDied","Data":"27df372dcf909b68e1d9e9140787d605f56ad81a9ffab0b510df8366240cfe5f"} Nov 28 12:39:06 crc kubenswrapper[4779]: I1128 12:39:06.996111 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-x7l6c" podUID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" containerName="registry-server" probeResult="failure" output=< Nov 28 12:39:06 crc kubenswrapper[4779]: timeout: failed to connect service ":50051" within 1s Nov 28 12:39:06 crc kubenswrapper[4779]: > Nov 28 12:39:07 crc kubenswrapper[4779]: I1128 12:39:07.045786 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:39:07 crc kubenswrapper[4779]: I1128 12:39:07.099296 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:39:10 crc kubenswrapper[4779]: I1128 12:39:10.008925 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv895" event={"ID":"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5","Type":"ContainerStarted","Data":"749e7172c93aea059abd0e08a451b0a74624a75cdf6f068481b10a618a70c173"} Nov 28 12:39:10 crc kubenswrapper[4779]: I1128 12:39:10.030049 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nv895" podStartSLOduration=2.222743179 podStartE2EDuration="57.03002555s" podCreationTimestamp="2025-11-28 12:38:13 +0000 UTC" firstStartedPulling="2025-11-28 12:38:14.463209218 +0000 UTC m=+155.028884572" lastFinishedPulling="2025-11-28 12:39:09.270491589 +0000 UTC m=+209.836166943" observedRunningTime="2025-11-28 12:39:10.027123542 +0000 UTC m=+210.592798916" watchObservedRunningTime="2025-11-28 12:39:10.03002555 +0000 UTC m=+210.595700904" Nov 28 12:39:11 crc kubenswrapper[4779]: I1128 12:39:11.020953 4779 generic.go:334] "Generic (PLEG): container finished" podID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" containerID="363859923e5bc594ef4d36d343e001404c8d3b405152b94fa1cafe3b0218e6dc" exitCode=0 Nov 28 12:39:11 crc kubenswrapper[4779]: I1128 12:39:11.021043 4779 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgzr4" event={"ID":"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8","Type":"ContainerDied","Data":"363859923e5bc594ef4d36d343e001404c8d3b405152b94fa1cafe3b0218e6dc"} Nov 28 12:39:11 crc kubenswrapper[4779]: I1128 12:39:11.025514 4779 generic.go:334] "Generic (PLEG): container finished" podID="42148fd9-447b-43a5-b513-7cc37b19ab16" containerID="f4fb6b0df5b911cb50225409e882c801ce78b9b8465e783d177f6213bedf0f27" exitCode=0 Nov 28 12:39:11 crc kubenswrapper[4779]: I1128 12:39:11.025584 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsxnv" event={"ID":"42148fd9-447b-43a5-b513-7cc37b19ab16","Type":"ContainerDied","Data":"f4fb6b0df5b911cb50225409e882c801ce78b9b8465e783d177f6213bedf0f27"} Nov 28 12:39:11 crc kubenswrapper[4779]: I1128 12:39:11.033258 4779 generic.go:334] "Generic (PLEG): container finished" podID="83ce570d-f1e1-4168-9b49-3da4f6b31209" containerID="305b1a87468b65dbb5dcaf99ee16f8a51e115b06a3f61e656abf9531452cf3fb" exitCode=0 Nov 28 12:39:11 crc kubenswrapper[4779]: I1128 12:39:11.033297 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svdf4" event={"ID":"83ce570d-f1e1-4168-9b49-3da4f6b31209","Type":"ContainerDied","Data":"305b1a87468b65dbb5dcaf99ee16f8a51e115b06a3f61e656abf9531452cf3fb"} Nov 28 12:39:11 crc kubenswrapper[4779]: I1128 12:39:11.035280 4779 generic.go:334] "Generic (PLEG): container finished" podID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" containerID="8a7027c816b1c342706f6823486a004c029dbcf3900213bc802b7ae7e3f83e2a" exitCode=0 Nov 28 12:39:11 crc kubenswrapper[4779]: I1128 12:39:11.035354 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbp9s" event={"ID":"b88224c6-06e6-41c7-bba9-cb04ae3361e0","Type":"ContainerDied","Data":"8a7027c816b1c342706f6823486a004c029dbcf3900213bc802b7ae7e3f83e2a"} Nov 28 12:39:11 crc kubenswrapper[4779]: I1128 12:39:11.037599 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtp6j" event={"ID":"585d10b8-61e3-4059-9fe7-81895b9ca67d","Type":"ContainerStarted","Data":"a2e9a98e56451f470bffe9134f74e65600d14cc840aa8de9f1990e566427e8a4"} Nov 28 12:39:12 crc kubenswrapper[4779]: I1128 12:39:12.045221 4779 generic.go:334] "Generic (PLEG): container finished" podID="585d10b8-61e3-4059-9fe7-81895b9ca67d" containerID="a2e9a98e56451f470bffe9134f74e65600d14cc840aa8de9f1990e566427e8a4" exitCode=0 Nov 28 12:39:12 crc kubenswrapper[4779]: I1128 12:39:12.045332 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtp6j" event={"ID":"585d10b8-61e3-4059-9fe7-81895b9ca67d","Type":"ContainerDied","Data":"a2e9a98e56451f470bffe9134f74e65600d14cc840aa8de9f1990e566427e8a4"} Nov 28 12:39:12 crc kubenswrapper[4779]: I1128 12:39:12.052238 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgzr4" event={"ID":"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8","Type":"ContainerStarted","Data":"33655e6356484bf5141fcd294f96433d8845c25a12f4b19513c16f0c4a4fb4a4"} Nov 28 12:39:12 crc kubenswrapper[4779]: I1128 12:39:12.055227 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsxnv" event={"ID":"42148fd9-447b-43a5-b513-7cc37b19ab16","Type":"ContainerStarted","Data":"1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d"} Nov 28 12:39:12 crc 
kubenswrapper[4779]: I1128 12:39:12.057580 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svdf4" event={"ID":"83ce570d-f1e1-4168-9b49-3da4f6b31209","Type":"ContainerStarted","Data":"ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf"} Nov 28 12:39:12 crc kubenswrapper[4779]: I1128 12:39:12.059567 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbp9s" event={"ID":"b88224c6-06e6-41c7-bba9-cb04ae3361e0","Type":"ContainerStarted","Data":"91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0"} Nov 28 12:39:12 crc kubenswrapper[4779]: I1128 12:39:12.118817 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tsxnv" podStartSLOduration=2.142286216 podStartE2EDuration="59.118800124s" podCreationTimestamp="2025-11-28 12:38:13 +0000 UTC" firstStartedPulling="2025-11-28 12:38:14.453419824 +0000 UTC m=+155.019095178" lastFinishedPulling="2025-11-28 12:39:11.429933732 +0000 UTC m=+211.995609086" observedRunningTime="2025-11-28 12:39:12.117292433 +0000 UTC m=+212.682967777" watchObservedRunningTime="2025-11-28 12:39:12.118800124 +0000 UTC m=+212.684475478" Nov 28 12:39:12 crc kubenswrapper[4779]: I1128 12:39:12.120137 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xbp9s" podStartSLOduration=2.047627195 podStartE2EDuration="59.12013073s" podCreationTimestamp="2025-11-28 12:38:13 +0000 UTC" firstStartedPulling="2025-11-28 12:38:14.454861233 +0000 UTC m=+155.020536587" lastFinishedPulling="2025-11-28 12:39:11.527364768 +0000 UTC m=+212.093040122" observedRunningTime="2025-11-28 12:39:12.099889252 +0000 UTC m=+212.665564606" watchObservedRunningTime="2025-11-28 12:39:12.12013073 +0000 UTC m=+212.685806084" Nov 28 12:39:12 crc kubenswrapper[4779]: I1128 12:39:12.156604 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bgzr4" podStartSLOduration=3.20800614 podStartE2EDuration="59.156576235s" podCreationTimestamp="2025-11-28 12:38:13 +0000 UTC" firstStartedPulling="2025-11-28 12:38:15.506865007 +0000 UTC m=+156.072540361" lastFinishedPulling="2025-11-28 12:39:11.455435112 +0000 UTC m=+212.021110456" observedRunningTime="2025-11-28 12:39:12.153509612 +0000 UTC m=+212.719184966" watchObservedRunningTime="2025-11-28 12:39:12.156576235 +0000 UTC m=+212.722251579" Nov 28 12:39:12 crc kubenswrapper[4779]: I1128 12:39:12.177452 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-svdf4" podStartSLOduration=2.220928098 podStartE2EDuration="57.177424139s" podCreationTimestamp="2025-11-28 12:38:15 +0000 UTC" firstStartedPulling="2025-11-28 12:38:16.543326122 +0000 UTC m=+157.109001476" lastFinishedPulling="2025-11-28 12:39:11.499822163 +0000 UTC m=+212.065497517" observedRunningTime="2025-11-28 12:39:12.173848552 +0000 UTC m=+212.739523906" watchObservedRunningTime="2025-11-28 12:39:12.177424139 +0000 UTC m=+212.743099493" Nov 28 12:39:13 crc kubenswrapper[4779]: I1128 12:39:13.066858 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtp6j" event={"ID":"585d10b8-61e3-4059-9fe7-81895b9ca67d","Type":"ContainerStarted","Data":"0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0"} Nov 28 12:39:13 crc kubenswrapper[4779]: I1128 12:39:13.088077 4779 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gtp6j" podStartSLOduration=4.182140184 podStartE2EDuration="57.088044458s" podCreationTimestamp="2025-11-28 12:38:16 +0000 UTC" firstStartedPulling="2025-11-28 12:38:19.617721199 +0000 UTC m=+160.183396553" lastFinishedPulling="2025-11-28 12:39:12.523625473 +0000 UTC m=+213.089300827" observedRunningTime="2025-11-28 12:39:13.083439103 +0000 UTC m=+213.649114467" watchObservedRunningTime="2025-11-28 12:39:13.088044458 +0000 UTC m=+213.653719812" Nov 28 12:39:13 crc kubenswrapper[4779]: I1128 12:39:13.548378 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xbp9s" Nov 28 12:39:13 crc kubenswrapper[4779]: I1128 12:39:13.548462 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xbp9s" Nov 28 12:39:13 crc kubenswrapper[4779]: I1128 12:39:13.596119 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xbp9s" Nov 28 12:39:13 crc kubenswrapper[4779]: I1128 12:39:13.721162 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tsxnv" Nov 28 12:39:13 crc kubenswrapper[4779]: I1128 12:39:13.721250 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tsxnv" Nov 28 12:39:13 crc kubenswrapper[4779]: I1128 12:39:13.917495 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nv895" Nov 28 12:39:13 crc kubenswrapper[4779]: I1128 12:39:13.917590 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nv895" Nov 28 12:39:13 crc kubenswrapper[4779]: I1128 12:39:13.956645 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nv895" Nov 28 12:39:14 crc kubenswrapper[4779]: I1128 12:39:14.106411 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nv895" Nov 28 12:39:14 crc kubenswrapper[4779]: I1128 12:39:14.118760 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bgzr4" Nov 28 12:39:14 crc kubenswrapper[4779]: I1128 12:39:14.118847 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bgzr4" Nov 28 12:39:14 crc kubenswrapper[4779]: I1128 12:39:14.161431 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bgzr4" Nov 28 12:39:14 crc kubenswrapper[4779]: I1128 12:39:14.794007 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tsxnv" podUID="42148fd9-447b-43a5-b513-7cc37b19ab16" containerName="registry-server" probeResult="failure" output=< Nov 28 12:39:14 crc kubenswrapper[4779]: timeout: failed to connect service ":50051" within 1s Nov 28 12:39:14 crc kubenswrapper[4779]: > Nov 28 12:39:14 crc kubenswrapper[4779]: I1128 12:39:14.964451 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nv895"] Nov 28 12:39:15 crc kubenswrapper[4779]: I1128 12:39:15.521794 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-svdf4" Nov 28 12:39:15 crc kubenswrapper[4779]: I1128 12:39:15.521902 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-svdf4" Nov 28 12:39:15 crc kubenswrapper[4779]: I1128 12:39:15.786521 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-svdf4" Nov 28 12:39:15 crc kubenswrapper[4779]: I1128 12:39:15.996956 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x7l6c" Nov 28 12:39:16 crc kubenswrapper[4779]: I1128 12:39:16.041958 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x7l6c" Nov 28 12:39:16 crc kubenswrapper[4779]: I1128 12:39:16.085608 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nv895" podUID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" containerName="registry-server" containerID="cri-o://749e7172c93aea059abd0e08a451b0a74624a75cdf6f068481b10a618a70c173" gracePeriod=2 Nov 28 12:39:16 crc kubenswrapper[4779]: I1128 12:39:16.125968 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-svdf4" Nov 28 12:39:16 crc kubenswrapper[4779]: I1128 12:39:16.284799 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:39:16 crc kubenswrapper[4779]: I1128 12:39:16.285463 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:39:16 crc kubenswrapper[4779]: I1128 12:39:16.285556 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:39:16 crc kubenswrapper[4779]: I1128 12:39:16.286672 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:39:16 crc kubenswrapper[4779]: I1128 12:39:16.286868 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e" gracePeriod=600 Nov 28 12:39:17 crc kubenswrapper[4779]: I1128 12:39:17.366419 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7l6c"] Nov 28 12:39:17 crc kubenswrapper[4779]: I1128 12:39:17.366497 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:39:17 crc 
Nov 28 12:39:17 crc kubenswrapper[4779]: I1128 12:39:17.366517 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gtp6j" Nov 28 12:39:17 crc kubenswrapper[4779]: I1128 12:39:17.367048 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x7l6c" podUID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" containerName="registry-server" containerID="cri-o://8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf" gracePeriod=2 Nov 28 12:39:18 crc kubenswrapper[4779]: I1128 12:39:18.421252 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gtp6j" podUID="585d10b8-61e3-4059-9fe7-81895b9ca67d" containerName="registry-server" probeResult="failure" output=< Nov 28 12:39:18 crc kubenswrapper[4779]: timeout: failed to connect service ":50051" within 1s Nov 28 12:39:18 crc kubenswrapper[4779]: > Nov 28 12:39:18 crc kubenswrapper[4779]: I1128 12:39:18.525964 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x7l6c" Nov 28 12:39:18 crc kubenswrapper[4779]: I1128 12:39:18.537950 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq9s5\" (UniqueName: \"kubernetes.io/projected/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-kube-api-access-dq9s5\") pod \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\" (UID: \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\") " Nov 28 12:39:18 crc kubenswrapper[4779]: I1128 12:39:18.538240 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-utilities\") pod \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\" (UID: \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\") " Nov 28 12:39:18 crc kubenswrapper[4779]: I1128 12:39:18.538268 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-catalog-content\") pod \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\" (UID: \"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df\") " Nov 28 12:39:18 crc kubenswrapper[4779]: I1128 12:39:18.539727 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-utilities" (OuterVolumeSpecName: "utilities") pod "b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" (UID: "b3dbcb58-e82e-47c8-b02d-b7cdca5b52df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:39:18 crc kubenswrapper[4779]: I1128 12:39:18.550470 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-kube-api-access-dq9s5" (OuterVolumeSpecName: "kube-api-access-dq9s5") pod "b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" (UID: "b3dbcb58-e82e-47c8-b02d-b7cdca5b52df"). InnerVolumeSpecName "kube-api-access-dq9s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:39:18 crc kubenswrapper[4779]: I1128 12:39:18.568879 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" (UID: "b3dbcb58-e82e-47c8-b02d-b7cdca5b52df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:39:18 crc kubenswrapper[4779]: I1128 12:39:18.639544 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:18 crc kubenswrapper[4779]: I1128 12:39:18.639593 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:18 crc kubenswrapper[4779]: I1128 12:39:18.639614 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq9s5\" (UniqueName: \"kubernetes.io/projected/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df-kube-api-access-dq9s5\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.113503 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e" exitCode=0 Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.113600 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e"} Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.118172 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x7l6c" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.118182 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7l6c" event={"ID":"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df","Type":"ContainerDied","Data":"8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf"} Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.118163 4779 generic.go:334] "Generic (PLEG): container finished" podID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" containerID="8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf" exitCode=0 Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.118272 4779 scope.go:117] "RemoveContainer" containerID="8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.118418 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7l6c" event={"ID":"b3dbcb58-e82e-47c8-b02d-b7cdca5b52df","Type":"ContainerDied","Data":"28ff0678cfaebefa3e74543ab201a5af0713748f85d9ce29a6ce8a3dcaa05292"} Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.125333 4779 generic.go:334] "Generic (PLEG): container finished" podID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" containerID="749e7172c93aea059abd0e08a451b0a74624a75cdf6f068481b10a618a70c173" exitCode=0 Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.125414 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv895" event={"ID":"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5","Type":"ContainerDied","Data":"749e7172c93aea059abd0e08a451b0a74624a75cdf6f068481b10a618a70c173"} Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.164009 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7l6c"] Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.167173 4779 
scope.go:117] "RemoveContainer" containerID="fd64b8d716208e977fe34c5f11a5ea19ea99f5a108db8a154f6812ec97286c38" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.167516 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7l6c"] Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.198313 4779 scope.go:117] "RemoveContainer" containerID="3800c654e31aa4e2919ddd6ed96161a5d0a31455f445ce4a8eef6ea63b66c5a0" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.217895 4779 scope.go:117] "RemoveContainer" containerID="8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf" Nov 28 12:39:19 crc kubenswrapper[4779]: E1128 12:39:19.218736 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf\": container with ID starting with 8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf not found: ID does not exist" containerID="8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.218792 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf"} err="failed to get container status \"8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf\": rpc error: code = NotFound desc = could not find container \"8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf\": container with ID starting with 8b02c52c49582c72a0ca149363c21b92f7da7bea5422de75e0848dfe12a8efaf not found: ID does not exist" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.218830 4779 scope.go:117] "RemoveContainer" containerID="fd64b8d716208e977fe34c5f11a5ea19ea99f5a108db8a154f6812ec97286c38" Nov 28 12:39:19 crc kubenswrapper[4779]: E1128 12:39:19.220377 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd64b8d716208e977fe34c5f11a5ea19ea99f5a108db8a154f6812ec97286c38\": container with ID starting with fd64b8d716208e977fe34c5f11a5ea19ea99f5a108db8a154f6812ec97286c38 not found: ID does not exist" containerID="fd64b8d716208e977fe34c5f11a5ea19ea99f5a108db8a154f6812ec97286c38" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.220457 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd64b8d716208e977fe34c5f11a5ea19ea99f5a108db8a154f6812ec97286c38"} err="failed to get container status \"fd64b8d716208e977fe34c5f11a5ea19ea99f5a108db8a154f6812ec97286c38\": rpc error: code = NotFound desc = could not find container \"fd64b8d716208e977fe34c5f11a5ea19ea99f5a108db8a154f6812ec97286c38\": container with ID starting with fd64b8d716208e977fe34c5f11a5ea19ea99f5a108db8a154f6812ec97286c38 not found: ID does not exist" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.220516 4779 scope.go:117] "RemoveContainer" containerID="3800c654e31aa4e2919ddd6ed96161a5d0a31455f445ce4a8eef6ea63b66c5a0" Nov 28 12:39:19 crc kubenswrapper[4779]: E1128 12:39:19.223611 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3800c654e31aa4e2919ddd6ed96161a5d0a31455f445ce4a8eef6ea63b66c5a0\": container with ID starting with 3800c654e31aa4e2919ddd6ed96161a5d0a31455f445ce4a8eef6ea63b66c5a0 not found: ID does not exist" 
containerID="3800c654e31aa4e2919ddd6ed96161a5d0a31455f445ce4a8eef6ea63b66c5a0" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.223695 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3800c654e31aa4e2919ddd6ed96161a5d0a31455f445ce4a8eef6ea63b66c5a0"} err="failed to get container status \"3800c654e31aa4e2919ddd6ed96161a5d0a31455f445ce4a8eef6ea63b66c5a0\": rpc error: code = NotFound desc = could not find container \"3800c654e31aa4e2919ddd6ed96161a5d0a31455f445ce4a8eef6ea63b66c5a0\": container with ID starting with 3800c654e31aa4e2919ddd6ed96161a5d0a31455f445ce4a8eef6ea63b66c5a0 not found: ID does not exist" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.734500 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" path="/var/lib/kubelet/pods/b3dbcb58-e82e-47c8-b02d-b7cdca5b52df/volumes" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.779002 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nv895" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.857498 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4zdq\" (UniqueName: \"kubernetes.io/projected/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-kube-api-access-m4zdq\") pod \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\" (UID: \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\") " Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.857574 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-utilities\") pod \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\" (UID: \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\") " Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.857634 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-catalog-content\") pod \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\" (UID: \"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5\") " Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.859036 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-utilities" (OuterVolumeSpecName: "utilities") pod "dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" (UID: "dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.867742 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-kube-api-access-m4zdq" (OuterVolumeSpecName: "kube-api-access-m4zdq") pod "dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" (UID: "dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5"). InnerVolumeSpecName "kube-api-access-m4zdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.905442 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" (UID: "dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.959933 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.960376 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4zdq\" (UniqueName: \"kubernetes.io/projected/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-kube-api-access-m4zdq\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:19 crc kubenswrapper[4779]: I1128 12:39:19.960502 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:20 crc kubenswrapper[4779]: I1128 12:39:20.137006 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"655348a98a3eea4baa5e428dc13dd64fc735d8645fc4d4eaf09b66ffacea7023"} Nov 28 12:39:20 crc kubenswrapper[4779]: I1128 12:39:20.142750 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv895" event={"ID":"dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5","Type":"ContainerDied","Data":"edbdae4a094d13d14b44ca3ca9338d71437764ca81e4b20a345fb7a693ed2fdc"} Nov 28 12:39:20 crc kubenswrapper[4779]: I1128 12:39:20.142838 4779 scope.go:117] "RemoveContainer" containerID="749e7172c93aea059abd0e08a451b0a74624a75cdf6f068481b10a618a70c173" Nov 28 12:39:20 crc kubenswrapper[4779]: I1128 12:39:20.143133 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nv895"
Nov 28 12:39:20 crc kubenswrapper[4779]: I1128 12:39:20.175085 4779 scope.go:117] "RemoveContainer" containerID="27df372dcf909b68e1d9e9140787d605f56ad81a9ffab0b510df8366240cfe5f"
Nov 28 12:39:20 crc kubenswrapper[4779]: I1128 12:39:20.210405 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nv895"]
Nov 28 12:39:20 crc kubenswrapper[4779]: I1128 12:39:20.223232 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nv895"]
Nov 28 12:39:20 crc kubenswrapper[4779]: I1128 12:39:20.224547 4779 scope.go:117] "RemoveContainer" containerID="bcc55f13f6802257765d03ef2e099ebe54567447a9875ebd9777e59df80db553"
Nov 28 12:39:21 crc kubenswrapper[4779]: I1128 12:39:21.733348 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" path="/var/lib/kubelet/pods/dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5/volumes"
Nov 28 12:39:23 crc kubenswrapper[4779]: I1128 12:39:23.626150 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xbp9s"
Nov 28 12:39:23 crc kubenswrapper[4779]: I1128 12:39:23.784062 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:39:23 crc kubenswrapper[4779]: I1128 12:39:23.851808 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tsxnv"
Nov 28 12:39:24 crc kubenswrapper[4779]: I1128 12:39:24.192675 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:39:25 crc kubenswrapper[4779]: I1128 12:39:25.305318 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n97k6"]
Nov 28 12:39:25 crc kubenswrapper[4779]: I1128 12:39:25.565237 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgzr4"]
Nov 28 12:39:25 crc kubenswrapper[4779]: I1128 12:39:25.565794 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bgzr4" podUID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" containerName="registry-server" containerID="cri-o://33655e6356484bf5141fcd294f96433d8845c25a12f4b19513c16f0c4a4fb4a4" gracePeriod=2
Nov 28 12:39:27 crc kubenswrapper[4779]: I1128 12:39:27.457982 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gtp6j"
Nov 28 12:39:27 crc kubenswrapper[4779]: I1128 12:39:27.521619 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gtp6j"
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.203249 4779 generic.go:334] "Generic (PLEG): container finished" podID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" containerID="33655e6356484bf5141fcd294f96433d8845c25a12f4b19513c16f0c4a4fb4a4" exitCode=0
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.203396 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgzr4" event={"ID":"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8","Type":"ContainerDied","Data":"33655e6356484bf5141fcd294f96433d8845c25a12f4b19513c16f0c4a4fb4a4"}
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.365792 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gtp6j"]
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.667966 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.705254 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-catalog-content\") pod \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\" (UID: \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\") "
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.712555 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-utilities\") pod \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\" (UID: \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\") "
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.712718 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjm9g\" (UniqueName: \"kubernetes.io/projected/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-kube-api-access-qjm9g\") pod \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\" (UID: \"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8\") "
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.713743 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-utilities" (OuterVolumeSpecName: "utilities") pod "35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" (UID: "35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.721335 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-kube-api-access-qjm9g" (OuterVolumeSpecName: "kube-api-access-qjm9g") pod "35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" (UID: "35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8"). InnerVolumeSpecName "kube-api-access-qjm9g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.755342 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" (UID: "35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.834126 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjm9g\" (UniqueName: \"kubernetes.io/projected/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-kube-api-access-qjm9g\") on node \"crc\" DevicePath \"\""
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.834171 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 12:39:28 crc kubenswrapper[4779]: I1128 12:39:28.834186 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.210518 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gtp6j" podUID="585d10b8-61e3-4059-9fe7-81895b9ca67d" containerName="registry-server" containerID="cri-o://0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0" gracePeriod=2
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.211035 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgzr4"
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.213316 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgzr4" event={"ID":"35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8","Type":"ContainerDied","Data":"f3b1d3cb6a09949704f42871efe5b63dccfdb0fb956768e1f557d634f3231ea0"}
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.213432 4779 scope.go:117] "RemoveContainer" containerID="33655e6356484bf5141fcd294f96433d8845c25a12f4b19513c16f0c4a4fb4a4"
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.231987 4779 scope.go:117] "RemoveContainer" containerID="363859923e5bc594ef4d36d343e001404c8d3b405152b94fa1cafe3b0218e6dc"
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.243528 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgzr4"]
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.246022 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bgzr4"]
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.250860 4779 scope.go:117] "RemoveContainer" containerID="134d672ca608816dfa5b712006dbf0b4cf9c6d7a266646cbcc36de2b81c50ecf"
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.525681 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtp6j"
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.645497 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585d10b8-61e3-4059-9fe7-81895b9ca67d-utilities\") pod \"585d10b8-61e3-4059-9fe7-81895b9ca67d\" (UID: \"585d10b8-61e3-4059-9fe7-81895b9ca67d\") "
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.645564 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl9t4\" (UniqueName: \"kubernetes.io/projected/585d10b8-61e3-4059-9fe7-81895b9ca67d-kube-api-access-sl9t4\") pod \"585d10b8-61e3-4059-9fe7-81895b9ca67d\" (UID: \"585d10b8-61e3-4059-9fe7-81895b9ca67d\") "
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.645604 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585d10b8-61e3-4059-9fe7-81895b9ca67d-catalog-content\") pod \"585d10b8-61e3-4059-9fe7-81895b9ca67d\" (UID: \"585d10b8-61e3-4059-9fe7-81895b9ca67d\") "
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.646295 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585d10b8-61e3-4059-9fe7-81895b9ca67d-utilities" (OuterVolumeSpecName: "utilities") pod "585d10b8-61e3-4059-9fe7-81895b9ca67d" (UID: "585d10b8-61e3-4059-9fe7-81895b9ca67d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.650081 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585d10b8-61e3-4059-9fe7-81895b9ca67d-kube-api-access-sl9t4" (OuterVolumeSpecName: "kube-api-access-sl9t4") pod "585d10b8-61e3-4059-9fe7-81895b9ca67d" (UID: "585d10b8-61e3-4059-9fe7-81895b9ca67d"). InnerVolumeSpecName "kube-api-access-sl9t4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.732605 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" path="/var/lib/kubelet/pods/35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8/volumes"
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.747540 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl9t4\" (UniqueName: \"kubernetes.io/projected/585d10b8-61e3-4059-9fe7-81895b9ca67d-kube-api-access-sl9t4\") on node \"crc\" DevicePath \"\""
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.747571 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585d10b8-61e3-4059-9fe7-81895b9ca67d-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.750840 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585d10b8-61e3-4059-9fe7-81895b9ca67d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "585d10b8-61e3-4059-9fe7-81895b9ca67d" (UID: "585d10b8-61e3-4059-9fe7-81895b9ca67d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:39:29 crc kubenswrapper[4779]: I1128 12:39:29.848310 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585d10b8-61e3-4059-9fe7-81895b9ca67d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.218282 4779 generic.go:334] "Generic (PLEG): container finished" podID="585d10b8-61e3-4059-9fe7-81895b9ca67d" containerID="0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0" exitCode=0
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.218369 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtp6j" event={"ID":"585d10b8-61e3-4059-9fe7-81895b9ca67d","Type":"ContainerDied","Data":"0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0"}
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.218404 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtp6j" event={"ID":"585d10b8-61e3-4059-9fe7-81895b9ca67d","Type":"ContainerDied","Data":"b7ce228d68520745b1e44484b515df8c1e16ae366c443353d830cfb60b16dfdc"}
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.218423 4779 scope.go:117] "RemoveContainer" containerID="0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0"
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.218698 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtp6j"
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.241614 4779 scope.go:117] "RemoveContainer" containerID="a2e9a98e56451f470bffe9134f74e65600d14cc840aa8de9f1990e566427e8a4"
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.248064 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gtp6j"]
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.251930 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gtp6j"]
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.276475 4779 scope.go:117] "RemoveContainer" containerID="d92b0c82f54a538889bdad928254c0dce04b66a0d46234809c2a2abbe4f6ce62"
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.293948 4779 scope.go:117] "RemoveContainer" containerID="0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0"
Nov 28 12:39:30 crc kubenswrapper[4779]: E1128 12:39:30.294624 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0\": container with ID starting with 0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0 not found: ID does not exist" containerID="0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0"
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.294715 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0"} err="failed to get container status \"0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0\": rpc error: code = NotFound desc = could not find container \"0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0\": container with ID starting with 0f8989cbd5ff13a95b322382bb00fcdeafa7dc4caa261633c5f0d0a2a57e30b0 not found: ID does not exist"
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.294761 4779 scope.go:117] "RemoveContainer" containerID="a2e9a98e56451f470bffe9134f74e65600d14cc840aa8de9f1990e566427e8a4"
Nov 28 12:39:30 crc kubenswrapper[4779]: E1128 12:39:30.295417 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2e9a98e56451f470bffe9134f74e65600d14cc840aa8de9f1990e566427e8a4\": container with ID starting with a2e9a98e56451f470bffe9134f74e65600d14cc840aa8de9f1990e566427e8a4 not found: ID does not exist" containerID="a2e9a98e56451f470bffe9134f74e65600d14cc840aa8de9f1990e566427e8a4"
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.295485 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e9a98e56451f470bffe9134f74e65600d14cc840aa8de9f1990e566427e8a4"} err="failed to get container status \"a2e9a98e56451f470bffe9134f74e65600d14cc840aa8de9f1990e566427e8a4\": rpc error: code = NotFound desc = could not find container \"a2e9a98e56451f470bffe9134f74e65600d14cc840aa8de9f1990e566427e8a4\": container with ID starting with a2e9a98e56451f470bffe9134f74e65600d14cc840aa8de9f1990e566427e8a4 not found: ID does not exist"
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.295649 4779 scope.go:117] "RemoveContainer" containerID="d92b0c82f54a538889bdad928254c0dce04b66a0d46234809c2a2abbe4f6ce62"
Nov 28 12:39:30 crc kubenswrapper[4779]: E1128 12:39:30.296145 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d92b0c82f54a538889bdad928254c0dce04b66a0d46234809c2a2abbe4f6ce62\": container with ID starting with d92b0c82f54a538889bdad928254c0dce04b66a0d46234809c2a2abbe4f6ce62 not found: ID does not exist" containerID="d92b0c82f54a538889bdad928254c0dce04b66a0d46234809c2a2abbe4f6ce62"
Nov 28 12:39:30 crc kubenswrapper[4779]: I1128 12:39:30.296194 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d92b0c82f54a538889bdad928254c0dce04b66a0d46234809c2a2abbe4f6ce62"} err="failed to get container status \"d92b0c82f54a538889bdad928254c0dce04b66a0d46234809c2a2abbe4f6ce62\": rpc error: code = NotFound desc = could not find container \"d92b0c82f54a538889bdad928254c0dce04b66a0d46234809c2a2abbe4f6ce62\": container with ID starting with d92b0c82f54a538889bdad928254c0dce04b66a0d46234809c2a2abbe4f6ce62 not found: ID does not exist"
Nov 28 12:39:31 crc kubenswrapper[4779]: I1128 12:39:31.734829 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585d10b8-61e3-4059-9fe7-81895b9ca67d" path="/var/lib/kubelet/pods/585d10b8-61e3-4059-9fe7-81895b9ca67d/volumes"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.789642 4779 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.790257 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec" gracePeriod=15
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.790300 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74" gracePeriod=15
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.790342 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92" gracePeriod=15
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.790481 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6" gracePeriod=15
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.790385 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435" gracePeriod=15
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791255 4779 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791525 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791545 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791560 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" containerName="extract-content"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791569 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" containerName="extract-content"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791581 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" containerName="registry-server"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791590 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" containerName="registry-server"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791604 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791612 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791621 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791629 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791640 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" containerName="extract-content"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791648 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" containerName="extract-content"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791663 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585d10b8-61e3-4059-9fe7-81895b9ca67d" containerName="extract-utilities"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791673 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="585d10b8-61e3-4059-9fe7-81895b9ca67d" containerName="extract-utilities"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791687 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" containerName="extract-utilities"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791696 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" containerName="extract-utilities"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791708 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" containerName="registry-server"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791716 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" containerName="registry-server"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791727 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585d10b8-61e3-4059-9fe7-81895b9ca67d" containerName="registry-server"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791736 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="585d10b8-61e3-4059-9fe7-81895b9ca67d" containerName="registry-server"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791746 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791754 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791766 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" containerName="extract-utilities"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791774 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" containerName="extract-utilities"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791785 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585d10b8-61e3-4059-9fe7-81895b9ca67d" containerName="extract-content"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791793 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="585d10b8-61e3-4059-9fe7-81895b9ca67d" containerName="extract-content"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791804 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791812 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791823 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791832 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791845 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791853 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791865 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" containerName="registry-server"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791873 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" containerName="registry-server"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791882 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" containerName="extract-content"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791890 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" containerName="extract-content"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791903 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" containerName="extract-utilities"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791912 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" containerName="extract-utilities"
Nov 28 12:39:33 crc kubenswrapper[4779]: E1128 12:39:33.791920 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71e93bc-498c-4b1b-bf39-2990a508a0fa" containerName="pruner"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.791928 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71e93bc-498c-4b1b-bf39-2990a508a0fa" containerName="pruner"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.792039 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="a71e93bc-498c-4b1b-bf39-2990a508a0fa" containerName="pruner"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.792051 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.792067 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="585d10b8-61e3-4059-9fe7-81895b9ca67d" containerName="registry-server"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.792076 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.792086 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.792117 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.792128 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.792141 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5ce4e8-6378-44fa-b81f-2e675c5c1ea5" containerName="registry-server"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.792152 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.792166 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3dbcb58-e82e-47c8-b02d-b7cdca5b52df" containerName="registry-server"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.792180 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35ea5bf9-8f7b-43d6-ae1c-8cff8176bae8" containerName="registry-server"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.793705 4779 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.794296 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.801945 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.832402 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.911219 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.911385 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.911445 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.911501 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.911527 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.911651 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.911688 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 12:39:33 crc kubenswrapper[4779]: I1128 12:39:33.911711 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013438 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013510 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013529 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013575 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013596 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013614 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013655 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013664 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013688 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013713 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013715 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013718 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013757 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013753 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013688 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.013828 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.127814 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 12:39:34 crc kubenswrapper[4779]: E1128 12:39:34.170011 4779 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c2c09b1a027a9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 12:39:34.168782761 +0000 UTC m=+234.734458155,LastTimestamp:2025-11-28 12:39:34.168782761 +0000 UTC m=+234.734458155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.253155 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9bc4aeef1a441f6f20af86c211842f44f504a9c2a2a2c85cc8bbe490a148219a"}
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.256776 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.259394 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.260849 4779 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74" exitCode=0
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.260913 4779 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6" exitCode=0
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.260948 4779 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92" exitCode=0
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.260972 4779 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435" exitCode=2
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.260996 4779 scope.go:117] "RemoveContainer" containerID="9026b47ba3a0076e3f66e452bc9a223292a17659f2b80d04ef6eb6a5c0448710"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.264350 4779 generic.go:334] "Generic (PLEG): container finished" podID="054c0628-de67-429b-bb65-ac369cde4509" containerID="31c11ccbc7134d09aa016f32b08a4dc02e273a4130e32e5bd0718c01ad274507" exitCode=0
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.264415 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"054c0628-de67-429b-bb65-ac369cde4509","Type":"ContainerDied","Data":"31c11ccbc7134d09aa016f32b08a4dc02e273a4130e32e5bd0718c01ad274507"}
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.265701 4779 status_manager.go:851] "Failed to get status for pod" podUID="054c0628-de67-429b-bb65-ac369cde4509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:34 crc kubenswrapper[4779]: I1128 12:39:34.266457 4779 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.274012 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.279302 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"75ed654f4d3746231833efb7c36a194bd92f5bfa95df7cfb54607fb3b6ecc8ee"}
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.280341 4779 status_manager.go:851] "Failed to get status for pod" podUID="054c0628-de67-429b-bb65-ac369cde4509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.281088 4779 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.575412 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.576808 4779 status_manager.go:851] "Failed to get status for pod" podUID="054c0628-de67-429b-bb65-ac369cde4509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.577452 4779 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.637468 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/054c0628-de67-429b-bb65-ac369cde4509-kubelet-dir\") pod \"054c0628-de67-429b-bb65-ac369cde4509\" (UID: \"054c0628-de67-429b-bb65-ac369cde4509\") "
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.637581 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/054c0628-de67-429b-bb65-ac369cde4509-var-lock\") pod \"054c0628-de67-429b-bb65-ac369cde4509\" (UID: \"054c0628-de67-429b-bb65-ac369cde4509\") "
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.637632 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/054c0628-de67-429b-bb65-ac369cde4509-kube-api-access\") pod \"054c0628-de67-429b-bb65-ac369cde4509\" (UID: \"054c0628-de67-429b-bb65-ac369cde4509\") "
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.637663 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/054c0628-de67-429b-bb65-ac369cde4509-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "054c0628-de67-429b-bb65-ac369cde4509" (UID: "054c0628-de67-429b-bb65-ac369cde4509"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.637730 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/054c0628-de67-429b-bb65-ac369cde4509-var-lock" (OuterVolumeSpecName: "var-lock") pod "054c0628-de67-429b-bb65-ac369cde4509" (UID: "054c0628-de67-429b-bb65-ac369cde4509"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.637876 4779 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/054c0628-de67-429b-bb65-ac369cde4509-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.637896 4779 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/054c0628-de67-429b-bb65-ac369cde4509-var-lock\") on node \"crc\" DevicePath \"\""
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.646739 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054c0628-de67-429b-bb65-ac369cde4509-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "054c0628-de67-429b-bb65-ac369cde4509" (UID: "054c0628-de67-429b-bb65-ac369cde4509"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:39:35 crc kubenswrapper[4779]: I1128 12:39:35.740951 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/054c0628-de67-429b-bb65-ac369cde4509-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.163047 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.164204 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.164698 4779 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.164880 4779 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.165332 4779 status_manager.go:851] "Failed to get status for pod" podUID="054c0628-de67-429b-bb65-ac369cde4509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.257438 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.257562 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.257602 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.257593 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.257699 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.257746 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.257974 4779 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.257994 4779 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.258007 4779 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.287644 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"054c0628-de67-429b-bb65-ac369cde4509","Type":"ContainerDied","Data":"fb573289255c067daab676b8315871782002b5df1a26f3e4ac27edf3b1d77090"}
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.287750 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb573289255c067daab676b8315871782002b5df1a26f3e4ac27edf3b1d77090"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.287668 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.291457 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.292488 4779 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec" exitCode=0
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.292618 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.292747 4779 scope.go:117] "RemoveContainer" containerID="a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.294356 4779 status_manager.go:851] "Failed to get status for pod" podUID="054c0628-de67-429b-bb65-ac369cde4509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.294940 4779 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.295329 4779 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.312888 4779 status_manager.go:851] "Failed to get status for pod" podUID="054c0628-de67-429b-bb65-ac369cde4509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.313708 4779 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.314338 4779 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.317627 4779 scope.go:117] "RemoveContainer" containerID="3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.339443 4779 scope.go:117] "RemoveContainer" containerID="aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.360521 4779 scope.go:117] "RemoveContainer" containerID="2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.381700 4779 scope.go:117] "RemoveContainer" containerID="6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.403286 4779 scope.go:117] "RemoveContainer" containerID="22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.434325 4779 scope.go:117] "RemoveContainer" containerID="a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74"
Nov 28 12:39:36 crc kubenswrapper[4779]: E1128 12:39:36.435247 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\": container with ID starting with a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74 not found: ID does not exist" containerID="a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.435327 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74"} err="failed to get container status \"a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\": rpc error: code = NotFound desc = could not find container \"a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74\": container with ID starting with a3db38b748527004df103120db865f7848491344dfdf5c89a6db10f4d15e6a74 not found: ID does not exist"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.435386 4779 scope.go:117] "RemoveContainer" containerID="3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6"
Nov 28 12:39:36 crc kubenswrapper[4779]: E1128 12:39:36.436081 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\": container with ID starting with 3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6 not found: ID does not exist" containerID="3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.436150 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6"} err="failed to get container status \"3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\": rpc error: code = NotFound desc = could not find container \"3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6\": container with ID starting with 3bafddd2d81f67f1445e3714d50eba5cfd6f75d60c2cb47d16f2086861a10bd6 not found: ID does not exist"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.436186 4779 scope.go:117] "RemoveContainer" containerID="aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92"
Nov 28 12:39:36 crc kubenswrapper[4779]: E1128 12:39:36.436811 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\": container with ID starting with aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92 not found: ID does not exist" containerID="aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.436845 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92"} err="failed to get container status \"aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\": rpc error: code = NotFound desc = could not find container \"aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92\": container with ID starting with aaf14e5e2229156dc442c92253ef1f23c75a5a6f5dec2d2537cddcdd1df54b92 not found: ID does not exist"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.436866 4779 scope.go:117] "RemoveContainer" containerID="2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435"
Nov 28 12:39:36 crc kubenswrapper[4779]: E1128 12:39:36.437498 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\": container with ID starting with 2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435 not found: ID does not exist" containerID="2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.437572 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435"} err="failed to get container status \"2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\": rpc error: code = NotFound desc = could not find container \"2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435\": container with ID starting with 2a76dbc5b41ebf68792cd449e4a245678be24151f0c980eedd06f956674b2435 not found: ID does not exist"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.437621 4779 scope.go:117] "RemoveContainer" containerID="6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec"
Nov 28 12:39:36 crc kubenswrapper[4779]: E1128 12:39:36.438176 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\": container with ID starting with 6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec not found: ID does not exist" containerID="6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.438229 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec"} err="failed to get container status \"6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\": rpc error: code = NotFound desc = could not find container \"6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec\": container with ID starting with 6912a42c418059dabf07c7d940bf1c4102c8dcf91cd4dd6ca0b177f4acd276ec not found: ID does not exist"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.438265 4779 scope.go:117] "RemoveContainer" containerID="22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a"
Nov 28 12:39:36 crc kubenswrapper[4779]: E1128 12:39:36.438716 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\": container with ID starting with 22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a not found: ID does not exist" containerID="22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a"
Nov 28 12:39:36 crc kubenswrapper[4779]: I1128 12:39:36.438763 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a"} err="failed to get container status \"22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\": rpc error: code = NotFound desc = could not find container \"22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a\": container with ID starting with 22cb1821a0274b593e00c2ab83616211b992e44d558a418a8833ad0d743d715a not found: ID does not exist"
Nov 28 12:39:37 crc kubenswrapper[4779]: E1128 12:39:37.734544 4779 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" volumeName="registry-storage"
Nov 28 12:39:37 crc kubenswrapper[4779]: I1128 12:39:37.740211 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Nov 28 12:39:39 crc kubenswrapper[4779]: I1128 12:39:39.730355 4779 status_manager.go:851] "Failed to get status for pod" podUID="054c0628-de67-429b-bb65-ac369cde4509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:39 crc kubenswrapper[4779]: I1128 12:39:39.731389 4779 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Nov 28 12:39:41 crc kubenswrapper[4779]: E1128 12:39:41.443046 4779 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c2c09b1a027a9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 12:39:34.168782761 +0000 UTC m=+234.734458155,LastTimestamp:2025-11-28 12:39:34.168782761 +0000 UTC m=+234.734458155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 12:39:41 crc kubenswrapper[4779]: E1128 12:39:41.541157 4779 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:41 crc kubenswrapper[4779]: E1128 12:39:41.541759 4779 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:41 crc kubenswrapper[4779]: E1128 12:39:41.542383 4779 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:41 crc kubenswrapper[4779]: E1128 12:39:41.542994 4779 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:41 crc kubenswrapper[4779]: E1128 12:39:41.543525 4779 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:41 crc kubenswrapper[4779]: I1128 12:39:41.543575 4779 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 28 12:39:41 crc kubenswrapper[4779]: E1128 12:39:41.543915 4779 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms" Nov 28 12:39:41 crc kubenswrapper[4779]: E1128 12:39:41.745169 4779 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms" Nov 28 12:39:42 crc kubenswrapper[4779]: E1128 12:39:42.146993 4779 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="800ms" Nov 28 12:39:42 crc kubenswrapper[4779]: E1128 12:39:42.948600 4779 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s" Nov 28 12:39:44 crc kubenswrapper[4779]: E1128 12:39:44.549839 4779 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="3.2s" Nov 28 12:39:46 crc kubenswrapper[4779]: I1128 12:39:46.907004 4779 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 12:39:46 crc kubenswrapper[4779]: I1128 12:39:46.907413 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 12:39:47 crc kubenswrapper[4779]: I1128 12:39:47.404536 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 12:39:47 crc kubenswrapper[4779]: I1128 12:39:47.404625 4779 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b" exitCode=1 Nov 28 12:39:47 crc kubenswrapper[4779]: I1128 12:39:47.404671 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b"} Nov 28 12:39:47 crc kubenswrapper[4779]: I1128 12:39:47.405430 4779 scope.go:117] "RemoveContainer" containerID="0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b" Nov 28 12:39:47 crc kubenswrapper[4779]: I1128 12:39:47.405845 4779 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:47 crc kubenswrapper[4779]: I1128 12:39:47.406546 4779 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:47 crc kubenswrapper[4779]: I1128 12:39:47.407212 4779 status_manager.go:851] "Failed to get status for pod" podUID="054c0628-de67-429b-bb65-ac369cde4509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:47 crc kubenswrapper[4779]: E1128 12:39:47.751784 4779 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="6.4s" Nov 28 12:39:48 crc 
kubenswrapper[4779]: I1128 12:39:48.420265 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 12:39:48 crc kubenswrapper[4779]: I1128 12:39:48.420693 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9bfa1f6025503181fed75a4d42e33410dd84ae74282e38ce502b67228405840f"} Nov 28 12:39:48 crc kubenswrapper[4779]: I1128 12:39:48.421979 4779 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:48 crc kubenswrapper[4779]: I1128 12:39:48.422879 4779 status_manager.go:851] "Failed to get status for pod" podUID="054c0628-de67-429b-bb65-ac369cde4509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:48 crc kubenswrapper[4779]: I1128 12:39:48.423466 4779 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:48 crc kubenswrapper[4779]: I1128 12:39:48.725672 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:39:48 crc kubenswrapper[4779]: I1128 12:39:48.727085 4779 status_manager.go:851] "Failed to get status for pod" podUID="054c0628-de67-429b-bb65-ac369cde4509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:48 crc kubenswrapper[4779]: I1128 12:39:48.727806 4779 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:48 crc kubenswrapper[4779]: I1128 12:39:48.728397 4779 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:48 crc kubenswrapper[4779]: I1128 12:39:48.749267 4779 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b303d954-23c9-4fc9-8e79-981009172099" Nov 28 12:39:48 crc kubenswrapper[4779]: I1128 12:39:48.749417 4779 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b303d954-23c9-4fc9-8e79-981009172099" Nov 28 12:39:48 crc kubenswrapper[4779]: E1128 12:39:48.750480 4779 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:39:48 crc kubenswrapper[4779]: I1128 12:39:48.751232 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:39:48 crc kubenswrapper[4779]: W1128 12:39:48.780423 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d66bfab78e62f91f10f24572ebb5c0442c814cb1d6aa0b39e0a7b82ce79436f8 WatchSource:0}: Error finding container d66bfab78e62f91f10f24572ebb5c0442c814cb1d6aa0b39e0a7b82ce79436f8: Status 404 returned error can't find the container with id d66bfab78e62f91f10f24572ebb5c0442c814cb1d6aa0b39e0a7b82ce79436f8 Nov 28 12:39:49 crc kubenswrapper[4779]: I1128 12:39:49.433121 4779 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="96386788c24ba1835ad0c00188f36172cf55dbb33e4004ac9f0e32681c90b1af" exitCode=0 Nov 28 12:39:49 crc kubenswrapper[4779]: I1128 12:39:49.433240 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"96386788c24ba1835ad0c00188f36172cf55dbb33e4004ac9f0e32681c90b1af"} Nov 28 12:39:49 crc kubenswrapper[4779]: I1128 12:39:49.434310 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d66bfab78e62f91f10f24572ebb5c0442c814cb1d6aa0b39e0a7b82ce79436f8"} Nov 28 12:39:49 crc kubenswrapper[4779]: I1128 12:39:49.434789 4779 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b303d954-23c9-4fc9-8e79-981009172099" Nov 28 12:39:49 crc kubenswrapper[4779]: I1128 12:39:49.434819 4779 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b303d954-23c9-4fc9-8e79-981009172099" Nov 28 12:39:49 crc kubenswrapper[4779]: E1128 12:39:49.435468 4779 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:39:49 crc kubenswrapper[4779]: I1128 12:39:49.435475 4779 status_manager.go:851] "Failed to get status for pod" podUID="054c0628-de67-429b-bb65-ac369cde4509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:49 crc kubenswrapper[4779]: I1128 12:39:49.436572 4779 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:49 crc kubenswrapper[4779]: I1128 12:39:49.437153 4779 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:49 crc kubenswrapper[4779]: I1128 12:39:49.735634 4779 
status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:49 crc kubenswrapper[4779]: I1128 12:39:49.736207 4779 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:49 crc kubenswrapper[4779]: I1128 12:39:49.736889 4779 status_manager.go:851] "Failed to get status for pod" podUID="054c0628-de67-429b-bb65-ac369cde4509" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:49 crc kubenswrapper[4779]: I1128 12:39:49.737361 4779 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.335409 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" podUID="7cbccab5-86c6-4c0f-82f6-9ae159b32cce" containerName="oauth-openshift" containerID="cri-o://5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab" gracePeriod=15 Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.451423 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a9c037781387b8e8b530020577d34ac2fa536941f9473dfc767e40f03cb4643d"} Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.451494 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c04b0be889370b28fe81ec7f64b9c89cb80453c1d1aadc6290149b7b0b478a56"} Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.824178 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.838271 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882621 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-error\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882661 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-audit-dir\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882686 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-trusted-ca-bundle\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882705 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-session\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882726 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-cliconfig\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882750 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-provider-selection\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882770 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-ocp-branding-template\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882788 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-idp-0-file-data\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882815 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-service-ca\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") 
" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882842 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-login\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882869 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-audit-policies\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882924 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-router-certs\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882938 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-serving-cert\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.882960 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tv2t\" (UniqueName: \"kubernetes.io/projected/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-kube-api-access-7tv2t\") pod \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\" (UID: \"7cbccab5-86c6-4c0f-82f6-9ae159b32cce\") " Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.884909 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.887725 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.887780 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.889363 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.889852 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.889858 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-kube-api-access-7tv2t" (OuterVolumeSpecName: "kube-api-access-7tv2t") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "kube-api-access-7tv2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.890515 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.895149 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.900871 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.903680 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.905225 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.905509 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.905744 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.905878 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7cbccab5-86c6-4c0f-82f6-9ae159b32cce" (UID: "7cbccab5-86c6-4c0f-82f6-9ae159b32cce"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984221 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984471 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984483 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984491 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984502 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984512 4779 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984520 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984555 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984568 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tv2t\" (UniqueName: \"kubernetes.io/projected/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-kube-api-access-7tv2t\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984577 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984586 4779 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984594 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984603 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:50 crc kubenswrapper[4779]: I1128 12:39:50.984612 4779 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7cbccab5-86c6-4c0f-82f6-9ae159b32cce-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.460130 4779 generic.go:334] "Generic (PLEG): container finished" podID="7cbccab5-86c6-4c0f-82f6-9ae159b32cce" containerID="5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab" exitCode=0 Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.460192 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" event={"ID":"7cbccab5-86c6-4c0f-82f6-9ae159b32cce","Type":"ContainerDied","Data":"5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab"} Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.460217 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" event={"ID":"7cbccab5-86c6-4c0f-82f6-9ae159b32cce","Type":"ContainerDied","Data":"be097b15d07ec9b2278cc964578069431e8480592cd501cebb3185e946015a3c"} Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.460220 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n97k6" Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.460232 4779 scope.go:117] "RemoveContainer" containerID="5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab" Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.465083 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f334a8e4606287a29b53a14a193079e474e945803344b6e1a59675a54bd39393"} Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.465125 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ec774826c98fc982989c37b951897c2b8a7f2aa804647bb12af77c8d80dec46c"} Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.465134 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8b043f17528df86815545d40251a72dfd57e65b0c713185beee4f95b3d132320"} Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.465392 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.465440 4779 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b303d954-23c9-4fc9-8e79-981009172099" Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.465466 4779 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="b303d954-23c9-4fc9-8e79-981009172099" Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.487391 4779 scope.go:117] "RemoveContainer" containerID="5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab" Nov 28 12:39:51 crc kubenswrapper[4779]: E1128 12:39:51.487740 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab\": container with ID starting with 5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab not found: ID does not exist" containerID="5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab" Nov 28 12:39:51 crc kubenswrapper[4779]: I1128 12:39:51.487782 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab"} err="failed to get container status \"5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab\": rpc error: code = NotFound desc = could not find container \"5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab\": container with ID starting with 5b5fd665d09c9a690febd4809c288c653a351aaa3d2025ca956148eeefe5f2ab not found: ID does not exist" Nov 28 12:39:53 crc kubenswrapper[4779]: I1128 12:39:53.751557 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:39:53 crc kubenswrapper[4779]: I1128 12:39:53.752253 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:39:53 crc kubenswrapper[4779]: I1128 12:39:53.757219 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:39:55 crc kubenswrapper[4779]: I1128 12:39:55.437994 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:39:55 crc kubenswrapper[4779]: I1128 12:39:55.438931 4779 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 12:39:55 crc kubenswrapper[4779]: I1128 12:39:55.438984 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 12:39:56 crc kubenswrapper[4779]: I1128 12:39:56.530959 4779 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:39:56 crc kubenswrapper[4779]: I1128 12:39:56.589582 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="63dc5fb5-7c98-44f5-8c9b-e9eec2c6e829" Nov 28 12:39:57 crc kubenswrapper[4779]: I1128 12:39:57.527218 4779 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b303d954-23c9-4fc9-8e79-981009172099" Nov 28 12:39:57 crc kubenswrapper[4779]: I1128 
12:39:57.527266 4779 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b303d954-23c9-4fc9-8e79-981009172099" Nov 28 12:39:57 crc kubenswrapper[4779]: I1128 12:39:57.558233 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:39:57 crc kubenswrapper[4779]: I1128 12:39:57.566605 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="63dc5fb5-7c98-44f5-8c9b-e9eec2c6e829" Nov 28 12:39:58 crc kubenswrapper[4779]: I1128 12:39:58.533850 4779 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b303d954-23c9-4fc9-8e79-981009172099" Nov 28 12:39:58 crc kubenswrapper[4779]: I1128 12:39:58.534287 4779 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b303d954-23c9-4fc9-8e79-981009172099" Nov 28 12:39:58 crc kubenswrapper[4779]: I1128 12:39:58.538005 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="63dc5fb5-7c98-44f5-8c9b-e9eec2c6e829" Nov 28 12:40:05 crc kubenswrapper[4779]: I1128 12:40:05.437541 4779 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 12:40:05 crc kubenswrapper[4779]: I1128 12:40:05.437883 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 12:40:06 crc kubenswrapper[4779]: I1128 12:40:06.468652 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 12:40:06 crc kubenswrapper[4779]: I1128 12:40:06.657321 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 12:40:06 crc kubenswrapper[4779]: I1128 12:40:06.662791 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 12:40:06 crc kubenswrapper[4779]: I1128 12:40:06.885861 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 12:40:07 crc kubenswrapper[4779]: I1128 12:40:07.496073 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 12:40:07 crc kubenswrapper[4779]: I1128 12:40:07.593503 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 12:40:07 crc kubenswrapper[4779]: I1128 12:40:07.964548 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 12:40:08 crc kubenswrapper[4779]: I1128 12:40:08.317345 4779 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 12:40:08 crc kubenswrapper[4779]: I1128 12:40:08.346589 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 12:40:08 crc kubenswrapper[4779]: I1128 12:40:08.588744 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 12:40:08 crc kubenswrapper[4779]: I1128 12:40:08.624654 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 12:40:08 crc kubenswrapper[4779]: I1128 12:40:08.676558 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 12:40:08 crc kubenswrapper[4779]: I1128 12:40:08.764866 4779 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 12:40:08 crc kubenswrapper[4779]: I1128 12:40:08.832734 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 12:40:08 crc kubenswrapper[4779]: I1128 12:40:08.953642 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 12:40:09 crc kubenswrapper[4779]: I1128 12:40:09.001389 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 12:40:09 crc kubenswrapper[4779]: I1128 12:40:09.062619 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 12:40:09 crc kubenswrapper[4779]: I1128 12:40:09.307677 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 28 12:40:09 crc kubenswrapper[4779]: I1128 12:40:09.320719 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 12:40:09 crc kubenswrapper[4779]: I1128 12:40:09.571696 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 12:40:09 crc kubenswrapper[4779]: I1128 12:40:09.581138 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 12:40:09 crc kubenswrapper[4779]: I1128 12:40:09.612531 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 12:40:09 crc kubenswrapper[4779]: I1128 12:40:09.697364 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 12:40:09 crc kubenswrapper[4779]: I1128 12:40:09.794130 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 12:40:09 crc kubenswrapper[4779]: I1128 12:40:09.806307 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 12:40:09 crc kubenswrapper[4779]: I1128 12:40:09.874408 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 12:40:09 crc kubenswrapper[4779]: I1128 12:40:09.880620 4779 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 12:40:10 crc kubenswrapper[4779]: I1128 12:40:10.138004 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 12:40:10 crc kubenswrapper[4779]: I1128 12:40:10.192056 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 12:40:10 crc kubenswrapper[4779]: I1128 12:40:10.588531 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 12:40:10 crc kubenswrapper[4779]: I1128 12:40:10.606148 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 12:40:10 crc kubenswrapper[4779]: I1128 12:40:10.608179 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 12:40:10 crc kubenswrapper[4779]: I1128 12:40:10.742564 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 12:40:10 crc kubenswrapper[4779]: I1128 12:40:10.760134 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 12:40:10 crc kubenswrapper[4779]: I1128 12:40:10.776507 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 12:40:10 crc kubenswrapper[4779]: I1128 12:40:10.811160 4779 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 28 12:40:10 crc kubenswrapper[4779]: I1128 12:40:10.834801 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 12:40:10 crc kubenswrapper[4779]: I1128 12:40:10.984208 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.061379 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.134126 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.151634 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.167574 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.272436 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.307705 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.425710 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.464994 
4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.468820 4779 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.478669 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.515662 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.558517 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.604870 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.717443 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.750880 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.789697 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.835387 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.858611 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.941070 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.950069 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.975386 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.980888 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 12:40:11 crc kubenswrapper[4779]: I1128 12:40:11.986183 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.107606 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.111549 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.146165 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.146520 4779 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.192510 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.195341 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.220573 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.277360 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.302824 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.334288 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.347759 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.370544 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.411838 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.493359 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.498359 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.512524 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.614208 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 12:40:12 crc kubenswrapper[4779]: I1128 12:40:12.725773 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 12:40:13 crc kubenswrapper[4779]: I1128 12:40:13.046367 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 12:40:13 crc kubenswrapper[4779]: I1128 12:40:13.076247 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 12:40:13 crc kubenswrapper[4779]: I1128 12:40:13.260591 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 12:40:13 crc kubenswrapper[4779]: I1128 12:40:13.293487 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 12:40:13 crc kubenswrapper[4779]: I1128 12:40:13.374998 4779 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 28 12:40:13 crc kubenswrapper[4779]: I1128 12:40:13.391779 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 12:40:13 crc kubenswrapper[4779]: I1128 12:40:13.481403 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 12:40:13 crc kubenswrapper[4779]: I1128 12:40:13.491276 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 12:40:13 crc kubenswrapper[4779]: I1128 12:40:13.621387 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 12:40:13 crc kubenswrapper[4779]: I1128 12:40:13.875274 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.078627 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.161839 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.317225 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.349780 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.363008 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.394382 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.394674 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.400176 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.468642 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.492217 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.583366 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.693592 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.720768 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 
12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.747514 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.759748 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.772267 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 12:40:14 crc kubenswrapper[4779]: I1128 12:40:14.997385 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.060281 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.149067 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.214638 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.222128 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.308462 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.438518 4779 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.438930 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.439196 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.440268 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"9bfa1f6025503181fed75a4d42e33410dd84ae74282e38ce502b67228405840f"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.440644 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://9bfa1f6025503181fed75a4d42e33410dd84ae74282e38ce502b67228405840f" gracePeriod=30 Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.447707 
4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.450682 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.461057 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.484865 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.737358 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.851134 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.854808 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.893787 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.908372 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.983215 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.984476 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 12:40:15 crc kubenswrapper[4779]: I1128 12:40:15.994330 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.002998 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.036900 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.074996 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.075294 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.140786 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.155743 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.170449 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.222343 4779 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.280411 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.305925 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.325134 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.396977 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.440164 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.563215 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.631133 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.701071 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.813589 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.836148 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.868363 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 28 12:40:16 crc kubenswrapper[4779]: I1128 12:40:16.991946 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.092796 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.181592 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.185871 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.282821 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.318192 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.331139 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.376309 4779 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.406443 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.422061 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.498116 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.524610 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.581537 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.681610 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.712311 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.791627 4779 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.808163 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.824938 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.848708 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.923721 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.951620 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.962723 4779 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.971964 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.971934153 podStartE2EDuration="44.971934153s" podCreationTimestamp="2025-11-28 12:39:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:39:56.67163752 +0000 UTC m=+257.237312904" watchObservedRunningTime="2025-11-28 12:40:17.971934153 +0000 UTC m=+278.537609547" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.974128 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-n97k6"] Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.974213 4779 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-bb968f6ff-qnv78"] Nov 28 12:40:17 crc kubenswrapper[4779]: E1128 12:40:17.974749 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cbccab5-86c6-4c0f-82f6-9ae159b32cce" containerName="oauth-openshift" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.974785 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cbccab5-86c6-4c0f-82f6-9ae159b32cce" containerName="oauth-openshift" Nov 28 12:40:17 crc kubenswrapper[4779]: E1128 12:40:17.974832 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054c0628-de67-429b-bb65-ac369cde4509" containerName="installer" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.974848 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="054c0628-de67-429b-bb65-ac369cde4509" containerName="installer" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.974760 4779 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b303d954-23c9-4fc9-8e79-981009172099" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.974891 4779 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b303d954-23c9-4fc9-8e79-981009172099" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.975027 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="054c0628-de67-429b-bb65-ac369cde4509" containerName="installer" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.975048 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cbccab5-86c6-4c0f-82f6-9ae159b32cce" containerName="oauth-openshift" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.976439 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.980259 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.981333 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.983311 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.983550 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.983735 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.984250 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.984306 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.985058 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.985435 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.987415 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.988020 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.988325 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.989297 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 12:40:17 crc kubenswrapper[4779]: I1128 12:40:17.991462 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.001960 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.007537 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.008089 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.026926 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.02689676 podStartE2EDuration="22.02689676s" 
podCreationTimestamp="2025-11-28 12:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:40:18.019169991 +0000 UTC m=+278.584845405" watchObservedRunningTime="2025-11-28 12:40:18.02689676 +0000 UTC m=+278.592572154" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.027913 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.069439 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.072129 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.076496 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.102883 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.102953 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-session\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.103012 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.103053 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.103087 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-audit-policies\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.103163 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-service-ca\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.103205 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-router-certs\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.103235 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-user-template-error\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.103270 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.103312 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.103440 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-audit-dir\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.103476 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.103517 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fqt7\" (UniqueName: \"kubernetes.io/projected/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-kube-api-access-9fqt7\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.103552 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-user-template-login\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.204764 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-audit-dir\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.204825 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.204859 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fqt7\" (UniqueName: \"kubernetes.io/projected/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-kube-api-access-9fqt7\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.204886 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-user-template-login\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.204914 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.204913 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-audit-dir\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.204939 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-session\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.204996 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.205020 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.205042 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-audit-policies\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.205071 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-service-ca\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.205117 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-router-certs\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.205140 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-user-template-error\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.205165 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.205190 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.206235 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-audit-policies\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.206282 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.208168 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-service-ca\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.209856 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.210604 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-router-certs\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.210661 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.211694 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.213020 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-user-template-login\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.214005 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-user-template-error\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.214869 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.215483 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-session\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.219035 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.231657 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fqt7\" (UniqueName: \"kubernetes.io/projected/ef77ca2e-cd2b-4980-84d2-f4343b46cc47-kube-api-access-9fqt7\") pod \"oauth-openshift-bb968f6ff-qnv78\" (UID: \"ef77ca2e-cd2b-4980-84d2-f4343b46cc47\") " pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.252744 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.254245 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.261858 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.267721 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.275401 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.291222 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.306242 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.310700 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.319720 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.348604 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.386595 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.413117 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.434472 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.454900 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.530417 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-bb968f6ff-qnv78"] Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.539887 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.555940 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.621550 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.639354 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.664454 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.664820 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" event={"ID":"ef77ca2e-cd2b-4980-84d2-f4343b46cc47","Type":"ContainerStarted","Data":"78432eaee8a4adc703a382266b76450a2e639ceb2f10e39ed5533bc309280302"} Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.674509 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.816268 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.917773 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 12:40:18 crc kubenswrapper[4779]: I1128 12:40:18.967466 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.009703 4779 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.049149 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.158499 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.206080 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.260122 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.262960 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.325071 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.328836 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.453893 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.549845 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.649633 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.676759 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" event={"ID":"ef77ca2e-cd2b-4980-84d2-f4343b46cc47","Type":"ContainerStarted","Data":"20bcf392fdb222c0c5404faaa5403488da203d75ba466ed6681b2db0e2a37327"} Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.677403 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.685309 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.710459 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-bb968f6ff-qnv78" podStartSLOduration=54.710428933 podStartE2EDuration="54.710428933s" podCreationTimestamp="2025-11-28 12:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:40:19.709219 +0000 UTC m=+280.274894394" watchObservedRunningTime="2025-11-28 12:40:19.710428933 +0000 UTC m=+280.276104357" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.738619 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cbccab5-86c6-4c0f-82f6-9ae159b32cce" path="/var/lib/kubelet/pods/7cbccab5-86c6-4c0f-82f6-9ae159b32cce/volumes" 
Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.773711 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.813411 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.855387 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 12:40:19 crc kubenswrapper[4779]: I1128 12:40:19.958978 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 12:40:20 crc kubenswrapper[4779]: I1128 12:40:20.046757 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 12:40:20 crc kubenswrapper[4779]: I1128 12:40:20.119529 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 12:40:20 crc kubenswrapper[4779]: I1128 12:40:20.326136 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 12:40:20 crc kubenswrapper[4779]: I1128 12:40:20.415458 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 12:40:20 crc kubenswrapper[4779]: I1128 12:40:20.642952 4779 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 28 12:40:20 crc kubenswrapper[4779]: I1128 12:40:20.648302 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 12:40:20 crc kubenswrapper[4779]: I1128 12:40:20.923261 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 12:40:21 crc kubenswrapper[4779]: I1128 12:40:21.010523 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 12:40:21 crc kubenswrapper[4779]: I1128 12:40:21.124903 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 12:40:21 crc kubenswrapper[4779]: I1128 12:40:21.256687 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 12:40:21 crc kubenswrapper[4779]: I1128 12:40:21.282352 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 12:40:21 crc kubenswrapper[4779]: I1128 12:40:21.290278 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 12:40:21 crc kubenswrapper[4779]: I1128 12:40:21.297736 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 12:40:21 crc kubenswrapper[4779]: I1128 12:40:21.331586 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 12:40:21 crc kubenswrapper[4779]: I1128 12:40:21.404937 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 28 12:40:21 crc 
kubenswrapper[4779]: I1128 12:40:21.654322 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 28 12:40:21 crc kubenswrapper[4779]: I1128 12:40:21.752185 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 12:40:21 crc kubenswrapper[4779]: I1128 12:40:21.853611 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 12:40:22 crc kubenswrapper[4779]: I1128 12:40:22.111874 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 12:40:22 crc kubenswrapper[4779]: I1128 12:40:22.167555 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 12:40:22 crc kubenswrapper[4779]: I1128 12:40:22.208138 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 12:40:22 crc kubenswrapper[4779]: I1128 12:40:22.330935 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 12:40:22 crc kubenswrapper[4779]: I1128 12:40:22.374594 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 12:40:22 crc kubenswrapper[4779]: I1128 12:40:22.532039 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 12:40:22 crc kubenswrapper[4779]: I1128 12:40:22.588398 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 28 12:40:23 crc kubenswrapper[4779]: I1128 12:40:23.247295 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 12:40:29 crc kubenswrapper[4779]: I1128 12:40:29.333550 4779 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 12:40:29 crc kubenswrapper[4779]: I1128 12:40:29.334350 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://75ed654f4d3746231833efb7c36a194bd92f5bfa95df7cfb54607fb3b6ecc8ee" gracePeriod=5 Nov 28 12:40:34 crc kubenswrapper[4779]: I1128 12:40:34.785015 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 12:40:34 crc kubenswrapper[4779]: I1128 12:40:34.785137 4779 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="75ed654f4d3746231833efb7c36a194bd92f5bfa95df7cfb54607fb3b6ecc8ee" exitCode=137 Nov 28 12:40:34 crc kubenswrapper[4779]: I1128 12:40:34.929649 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 12:40:34 crc kubenswrapper[4779]: I1128 12:40:34.929776 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.076403 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.076503 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.076551 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.076591 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.076656 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.077045 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.077142 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.077181 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.077816 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.089281 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.179449 4779 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.179510 4779 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.179532 4779 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.179553 4779 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.179571 4779 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.738322 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.738762 4779 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.753922 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.753978 4779 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="69f68250-b51a-46e2-a8bc-29926d9430cc" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.761406 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.761447 4779 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="69f68250-b51a-46e2-a8bc-29926d9430cc" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.796038 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 12:40:35 crc kubenswrapper[4779]: I1128 12:40:35.796170 4779 scope.go:117] "RemoveContainer" containerID="75ed654f4d3746231833efb7c36a194bd92f5bfa95df7cfb54607fb3b6ecc8ee" Nov 28 12:40:35 crc kubenswrapper[4779]: 
I1128 12:40:35.796267 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 12:40:43 crc kubenswrapper[4779]: I1128 12:40:43.853814 4779 generic.go:334] "Generic (PLEG): container finished" podID="e2eedfd1-32f1-478a-b46d-939da24ba282" containerID="cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a" exitCode=0 Nov 28 12:40:43 crc kubenswrapper[4779]: I1128 12:40:43.853988 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" event={"ID":"e2eedfd1-32f1-478a-b46d-939da24ba282","Type":"ContainerDied","Data":"cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a"} Nov 28 12:40:43 crc kubenswrapper[4779]: I1128 12:40:43.855310 4779 scope.go:117] "RemoveContainer" containerID="cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a" Nov 28 12:40:44 crc kubenswrapper[4779]: I1128 12:40:44.863619 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" event={"ID":"e2eedfd1-32f1-478a-b46d-939da24ba282","Type":"ContainerStarted","Data":"ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c"} Nov 28 12:40:44 crc kubenswrapper[4779]: I1128 12:40:44.864835 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:40:44 crc kubenswrapper[4779]: I1128 12:40:44.868895 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:40:45 crc kubenswrapper[4779]: I1128 12:40:45.323972 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 12:40:45 crc kubenswrapper[4779]: I1128 12:40:45.874991 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 28 12:40:45 crc kubenswrapper[4779]: I1128 12:40:45.878299 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 12:40:45 crc kubenswrapper[4779]: I1128 12:40:45.878376 4779 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9bfa1f6025503181fed75a4d42e33410dd84ae74282e38ce502b67228405840f" exitCode=137 Nov 28 12:40:45 crc kubenswrapper[4779]: I1128 12:40:45.878529 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9bfa1f6025503181fed75a4d42e33410dd84ae74282e38ce502b67228405840f"} Nov 28 12:40:45 crc kubenswrapper[4779]: I1128 12:40:45.878619 4779 scope.go:117] "RemoveContainer" containerID="0417da6607c0d549767642332fa4fb21bbef525d7073d0a352120092d3450f2b" Nov 28 12:40:46 crc kubenswrapper[4779]: I1128 12:40:46.886027 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 28 12:40:46 crc kubenswrapper[4779]: I1128 12:40:46.886985 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3520c2afce20aaadc1fb772ddd4fa8ddd73322ccdfc3b0d0e7b84a2cbfb4449e"} Nov 28 12:40:49 crc kubenswrapper[4779]: I1128 12:40:49.914604 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 12:40:50 crc kubenswrapper[4779]: I1128 12:40:50.825163 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:40:52 crc kubenswrapper[4779]: I1128 12:40:52.736456 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 12:40:55 crc kubenswrapper[4779]: I1128 12:40:55.437475 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:40:55 crc kubenswrapper[4779]: I1128 12:40:55.444694 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:40:57 crc kubenswrapper[4779]: I1128 12:40:57.718508 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 12:40:59 crc kubenswrapper[4779]: I1128 12:40:59.206898 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 12:41:00 crc kubenswrapper[4779]: I1128 12:41:00.831677 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 12:41:07 crc kubenswrapper[4779]: I1128 12:41:07.884249 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s"] Nov 28 12:41:07 crc kubenswrapper[4779]: I1128 12:41:07.885197 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" podUID="b5705070-06f5-4ad4-b5df-4d82f90f8e27" containerName="route-controller-manager" containerID="cri-o://ccab56ae3804a28125b1d7c71c470708a79879f4b488c0a19107038e93a0ca34" gracePeriod=30 Nov 28 12:41:07 crc kubenswrapper[4779]: I1128 12:41:07.899176 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8lqfg"] Nov 28 12:41:07 crc kubenswrapper[4779]: I1128 12:41:07.899437 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" podUID="13c936d9-26fd-46c4-9099-05a09312e511" containerName="controller-manager" containerID="cri-o://5e4f81708632f2bd20f6f63a95cf5c33bca575a5745ad3b3b03a5bcc55ba51b1" gracePeriod=30 Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.053293 4779 generic.go:334] "Generic (PLEG): container finished" podID="13c936d9-26fd-46c4-9099-05a09312e511" containerID="5e4f81708632f2bd20f6f63a95cf5c33bca575a5745ad3b3b03a5bcc55ba51b1" exitCode=0 Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.053351 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" 
event={"ID":"13c936d9-26fd-46c4-9099-05a09312e511","Type":"ContainerDied","Data":"5e4f81708632f2bd20f6f63a95cf5c33bca575a5745ad3b3b03a5bcc55ba51b1"} Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.082658 4779 generic.go:334] "Generic (PLEG): container finished" podID="b5705070-06f5-4ad4-b5df-4d82f90f8e27" containerID="ccab56ae3804a28125b1d7c71c470708a79879f4b488c0a19107038e93a0ca34" exitCode=0 Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.082716 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" event={"ID":"b5705070-06f5-4ad4-b5df-4d82f90f8e27","Type":"ContainerDied","Data":"ccab56ae3804a28125b1d7c71c470708a79879f4b488c0a19107038e93a0ca34"} Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.265879 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.301322 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.383329 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87lvc\" (UniqueName: \"kubernetes.io/projected/b5705070-06f5-4ad4-b5df-4d82f90f8e27-kube-api-access-87lvc\") pod \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.383371 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5705070-06f5-4ad4-b5df-4d82f90f8e27-client-ca\") pod \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.383410 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5705070-06f5-4ad4-b5df-4d82f90f8e27-serving-cert\") pod \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.383434 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5705070-06f5-4ad4-b5df-4d82f90f8e27-config\") pod \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\" (UID: \"b5705070-06f5-4ad4-b5df-4d82f90f8e27\") " Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.383502 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-proxy-ca-bundles\") pod \"13c936d9-26fd-46c4-9099-05a09312e511\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.383520 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-client-ca\") pod \"13c936d9-26fd-46c4-9099-05a09312e511\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.384279 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"13c936d9-26fd-46c4-9099-05a09312e511" (UID: "13c936d9-26fd-46c4-9099-05a09312e511"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.384287 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5705070-06f5-4ad4-b5df-4d82f90f8e27-client-ca" (OuterVolumeSpecName: "client-ca") pod "b5705070-06f5-4ad4-b5df-4d82f90f8e27" (UID: "b5705070-06f5-4ad4-b5df-4d82f90f8e27"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.384343 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5705070-06f5-4ad4-b5df-4d82f90f8e27-config" (OuterVolumeSpecName: "config") pod "b5705070-06f5-4ad4-b5df-4d82f90f8e27" (UID: "b5705070-06f5-4ad4-b5df-4d82f90f8e27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.384368 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "13c936d9-26fd-46c4-9099-05a09312e511" (UID: "13c936d9-26fd-46c4-9099-05a09312e511"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.388662 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5705070-06f5-4ad4-b5df-4d82f90f8e27-kube-api-access-87lvc" (OuterVolumeSpecName: "kube-api-access-87lvc") pod "b5705070-06f5-4ad4-b5df-4d82f90f8e27" (UID: "b5705070-06f5-4ad4-b5df-4d82f90f8e27"). InnerVolumeSpecName "kube-api-access-87lvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.388688 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5705070-06f5-4ad4-b5df-4d82f90f8e27-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b5705070-06f5-4ad4-b5df-4d82f90f8e27" (UID: "b5705070-06f5-4ad4-b5df-4d82f90f8e27"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.484980 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m75sm\" (UniqueName: \"kubernetes.io/projected/13c936d9-26fd-46c4-9099-05a09312e511-kube-api-access-m75sm\") pod \"13c936d9-26fd-46c4-9099-05a09312e511\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.485074 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13c936d9-26fd-46c4-9099-05a09312e511-serving-cert\") pod \"13c936d9-26fd-46c4-9099-05a09312e511\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.485170 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-config\") pod \"13c936d9-26fd-46c4-9099-05a09312e511\" (UID: \"13c936d9-26fd-46c4-9099-05a09312e511\") " Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.485380 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87lvc\" (UniqueName: \"kubernetes.io/projected/b5705070-06f5-4ad4-b5df-4d82f90f8e27-kube-api-access-87lvc\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.485400 4779 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5705070-06f5-4ad4-b5df-4d82f90f8e27-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.485408 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5705070-06f5-4ad4-b5df-4d82f90f8e27-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.485417 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5705070-06f5-4ad4-b5df-4d82f90f8e27-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.485425 4779 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.485434 4779 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.485986 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-config" (OuterVolumeSpecName: "config") pod "13c936d9-26fd-46c4-9099-05a09312e511" (UID: "13c936d9-26fd-46c4-9099-05a09312e511"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.488665 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13c936d9-26fd-46c4-9099-05a09312e511-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "13c936d9-26fd-46c4-9099-05a09312e511" (UID: "13c936d9-26fd-46c4-9099-05a09312e511"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.488747 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c936d9-26fd-46c4-9099-05a09312e511-kube-api-access-m75sm" (OuterVolumeSpecName: "kube-api-access-m75sm") pod "13c936d9-26fd-46c4-9099-05a09312e511" (UID: "13c936d9-26fd-46c4-9099-05a09312e511"). InnerVolumeSpecName "kube-api-access-m75sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.587192 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c936d9-26fd-46c4-9099-05a09312e511-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.587258 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m75sm\" (UniqueName: \"kubernetes.io/projected/13c936d9-26fd-46c4-9099-05a09312e511-kube-api-access-m75sm\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:08 crc kubenswrapper[4779]: I1128 12:41:08.587282 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13c936d9-26fd-46c4-9099-05a09312e511-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.091956 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" event={"ID":"b5705070-06f5-4ad4-b5df-4d82f90f8e27","Type":"ContainerDied","Data":"339f3a2f3db4143409d08ba95eaa6499179449f76b81de3b0047ccedc7a043ad"} Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.093509 4779 scope.go:117] "RemoveContainer" containerID="ccab56ae3804a28125b1d7c71c470708a79879f4b488c0a19107038e93a0ca34" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.093829 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.103672 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" event={"ID":"13c936d9-26fd-46c4-9099-05a09312e511","Type":"ContainerDied","Data":"7da9220129db78d1b0a195910e73170ca1fc98d55331f870492edba7adaefc90"} Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.103933 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8lqfg" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.127465 4779 scope.go:117] "RemoveContainer" containerID="5e4f81708632f2bd20f6f63a95cf5c33bca575a5745ad3b3b03a5bcc55ba51b1" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.155915 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s"] Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.161222 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tvc5s"] Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.183173 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8lqfg"] Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.183422 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8lqfg"] Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.336320 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8"] Nov 28 12:41:09 crc kubenswrapper[4779]: E1128 12:41:09.336718 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.336748 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 12:41:09 crc kubenswrapper[4779]: E1128 12:41:09.336770 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c936d9-26fd-46c4-9099-05a09312e511" containerName="controller-manager" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.336784 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c936d9-26fd-46c4-9099-05a09312e511" containerName="controller-manager" Nov 28 12:41:09 crc kubenswrapper[4779]: E1128 12:41:09.336809 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5705070-06f5-4ad4-b5df-4d82f90f8e27" containerName="route-controller-manager" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.336823 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5705070-06f5-4ad4-b5df-4d82f90f8e27" containerName="route-controller-manager" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.337023 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5705070-06f5-4ad4-b5df-4d82f90f8e27" containerName="route-controller-manager" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.337053 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c936d9-26fd-46c4-9099-05a09312e511" containerName="controller-manager" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.338017 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.338753 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.342233 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.342303 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.342791 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.342971 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.343023 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.344304 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.349682 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj"] Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.350522 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.351344 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8"] Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.351967 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.355553 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj"] Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.357431 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.358146 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.358199 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.358151 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.358282 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.358649 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.414794 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ffdc9c35-1b39-4f14-b555-d4a4e4988112-client-ca\") pod \"route-controller-manager-66649d5fd7-6kzmj\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.415119 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf421ad-8f97-4429-a143-7afeed804142-client-ca\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.415278 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adf421ad-8f97-4429-a143-7afeed804142-config\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.415419 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf421ad-8f97-4429-a143-7afeed804142-serving-cert\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.415530 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99jkm\" (UniqueName: \"kubernetes.io/projected/ffdc9c35-1b39-4f14-b555-d4a4e4988112-kube-api-access-99jkm\") pod \"route-controller-manager-66649d5fd7-6kzmj\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.415666 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5b46\" (UniqueName: \"kubernetes.io/projected/adf421ad-8f97-4429-a143-7afeed804142-kube-api-access-m5b46\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.415800 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdc9c35-1b39-4f14-b555-d4a4e4988112-config\") pod \"route-controller-manager-66649d5fd7-6kzmj\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.415939 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffdc9c35-1b39-4f14-b555-d4a4e4988112-serving-cert\") pod \"route-controller-manager-66649d5fd7-6kzmj\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.416051 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf421ad-8f97-4429-a143-7afeed804142-proxy-ca-bundles\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.517057 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf421ad-8f97-4429-a143-7afeed804142-client-ca\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.517363 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adf421ad-8f97-4429-a143-7afeed804142-config\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.518993 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf421ad-8f97-4429-a143-7afeed804142-client-ca\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.520930 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adf421ad-8f97-4429-a143-7afeed804142-config\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.521081 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf421ad-8f97-4429-a143-7afeed804142-serving-cert\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.521239 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99jkm\" (UniqueName: \"kubernetes.io/projected/ffdc9c35-1b39-4f14-b555-d4a4e4988112-kube-api-access-99jkm\") pod \"route-controller-manager-66649d5fd7-6kzmj\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.521886 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5b46\" (UniqueName: \"kubernetes.io/projected/adf421ad-8f97-4429-a143-7afeed804142-kube-api-access-m5b46\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.522682 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdc9c35-1b39-4f14-b555-d4a4e4988112-config\") pod \"route-controller-manager-66649d5fd7-6kzmj\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " 
pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.524685 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdc9c35-1b39-4f14-b555-d4a4e4988112-config\") pod \"route-controller-manager-66649d5fd7-6kzmj\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.525654 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffdc9c35-1b39-4f14-b555-d4a4e4988112-serving-cert\") pod \"route-controller-manager-66649d5fd7-6kzmj\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.525822 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf421ad-8f97-4429-a143-7afeed804142-proxy-ca-bundles\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.526962 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffdc9c35-1b39-4f14-b555-d4a4e4988112-client-ca\") pod \"route-controller-manager-66649d5fd7-6kzmj\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.527331 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf421ad-8f97-4429-a143-7afeed804142-serving-cert\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.528222 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf421ad-8f97-4429-a143-7afeed804142-proxy-ca-bundles\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.528312 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffdc9c35-1b39-4f14-b555-d4a4e4988112-client-ca\") pod \"route-controller-manager-66649d5fd7-6kzmj\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.531755 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffdc9c35-1b39-4f14-b555-d4a4e4988112-serving-cert\") pod \"route-controller-manager-66649d5fd7-6kzmj\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.549051 4779 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99jkm\" (UniqueName: \"kubernetes.io/projected/ffdc9c35-1b39-4f14-b555-d4a4e4988112-kube-api-access-99jkm\") pod \"route-controller-manager-66649d5fd7-6kzmj\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.553132 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5b46\" (UniqueName: \"kubernetes.io/projected/adf421ad-8f97-4429-a143-7afeed804142-kube-api-access-m5b46\") pod \"controller-manager-6ccc76c7f8-6bvp8\" (UID: \"adf421ad-8f97-4429-a143-7afeed804142\") " pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.665188 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.679544 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.744518 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c936d9-26fd-46c4-9099-05a09312e511" path="/var/lib/kubelet/pods/13c936d9-26fd-46c4-9099-05a09312e511/volumes" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.745761 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5705070-06f5-4ad4-b5df-4d82f90f8e27" path="/var/lib/kubelet/pods/b5705070-06f5-4ad4-b5df-4d82f90f8e27/volumes" Nov 28 12:41:09 crc kubenswrapper[4779]: I1128 12:41:09.922884 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8"] Nov 28 12:41:09 crc kubenswrapper[4779]: W1128 12:41:09.931809 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadf421ad_8f97_4429_a143_7afeed804142.slice/crio-618b24112387a85e4c78cd5cde94721a7cb05b7f663b99d33542d68bca560dde WatchSource:0}: Error finding container 618b24112387a85e4c78cd5cde94721a7cb05b7f663b99d33542d68bca560dde: Status 404 returned error can't find the container with id 618b24112387a85e4c78cd5cde94721a7cb05b7f663b99d33542d68bca560dde Nov 28 12:41:10 crc kubenswrapper[4779]: I1128 12:41:09.998318 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj"] Nov 28 12:41:10 crc kubenswrapper[4779]: I1128 12:41:10.115239 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" event={"ID":"ffdc9c35-1b39-4f14-b555-d4a4e4988112","Type":"ContainerStarted","Data":"3929556619b752347510206006bfe2d1de54f73b603a294ecb24ee6a0d09e54e"} Nov 28 12:41:10 crc kubenswrapper[4779]: I1128 12:41:10.119541 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" event={"ID":"adf421ad-8f97-4429-a143-7afeed804142","Type":"ContainerStarted","Data":"8dfefffd9da083e2ae24140d922f469c2aaa435c1abb3d59ba1a4957bdb37cb7"} Nov 28 12:41:10 crc kubenswrapper[4779]: I1128 12:41:10.119598 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" 
event={"ID":"adf421ad-8f97-4429-a143-7afeed804142","Type":"ContainerStarted","Data":"618b24112387a85e4c78cd5cde94721a7cb05b7f663b99d33542d68bca560dde"} Nov 28 12:41:10 crc kubenswrapper[4779]: I1128 12:41:10.119808 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:10 crc kubenswrapper[4779]: I1128 12:41:10.121018 4779 patch_prober.go:28] interesting pod/controller-manager-6ccc76c7f8-6bvp8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Nov 28 12:41:10 crc kubenswrapper[4779]: I1128 12:41:10.121079 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" podUID="adf421ad-8f97-4429-a143-7afeed804142" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Nov 28 12:41:10 crc kubenswrapper[4779]: I1128 12:41:10.137071 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" podStartSLOduration=3.137054481 podStartE2EDuration="3.137054481s" podCreationTimestamp="2025-11-28 12:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:41:10.134086307 +0000 UTC m=+330.699761651" watchObservedRunningTime="2025-11-28 12:41:10.137054481 +0000 UTC m=+330.702729835" Nov 28 12:41:11 crc kubenswrapper[4779]: I1128 12:41:11.133627 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" event={"ID":"ffdc9c35-1b39-4f14-b555-d4a4e4988112","Type":"ContainerStarted","Data":"da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c"} Nov 28 12:41:11 crc kubenswrapper[4779]: I1128 12:41:11.142346 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6ccc76c7f8-6bvp8" Nov 28 12:41:11 crc kubenswrapper[4779]: I1128 12:41:11.159117 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" podStartSLOduration=4.159072777 podStartE2EDuration="4.159072777s" podCreationTimestamp="2025-11-28 12:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:41:11.155447365 +0000 UTC m=+331.721122729" watchObservedRunningTime="2025-11-28 12:41:11.159072777 +0000 UTC m=+331.724748141" Nov 28 12:41:12 crc kubenswrapper[4779]: I1128 12:41:12.139743 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:12 crc kubenswrapper[4779]: I1128 12:41:12.146352 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:46 crc kubenswrapper[4779]: I1128 12:41:46.285471 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:41:46 crc kubenswrapper[4779]: I1128 12:41:46.286086 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:41:47 crc kubenswrapper[4779]: I1128 12:41:47.279080 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj"] Nov 28 12:41:47 crc kubenswrapper[4779]: I1128 12:41:47.279316 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" podUID="ffdc9c35-1b39-4f14-b555-d4a4e4988112" containerName="route-controller-manager" containerID="cri-o://da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c" gracePeriod=30 Nov 28 12:41:47 crc kubenswrapper[4779]: I1128 12:41:47.808483 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:47 crc kubenswrapper[4779]: I1128 12:41:47.995767 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdc9c35-1b39-4f14-b555-d4a4e4988112-config\") pod \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " Nov 28 12:41:47 crc kubenswrapper[4779]: I1128 12:41:47.995882 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99jkm\" (UniqueName: \"kubernetes.io/projected/ffdc9c35-1b39-4f14-b555-d4a4e4988112-kube-api-access-99jkm\") pod \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " Nov 28 12:41:47 crc kubenswrapper[4779]: I1128 12:41:47.996025 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffdc9c35-1b39-4f14-b555-d4a4e4988112-client-ca\") pod \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " Nov 28 12:41:47 crc kubenswrapper[4779]: I1128 12:41:47.996131 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffdc9c35-1b39-4f14-b555-d4a4e4988112-serving-cert\") pod \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\" (UID: \"ffdc9c35-1b39-4f14-b555-d4a4e4988112\") " Nov 28 12:41:47 crc kubenswrapper[4779]: I1128 12:41:47.997216 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffdc9c35-1b39-4f14-b555-d4a4e4988112-config" (OuterVolumeSpecName: "config") pod "ffdc9c35-1b39-4f14-b555-d4a4e4988112" (UID: "ffdc9c35-1b39-4f14-b555-d4a4e4988112"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:41:47 crc kubenswrapper[4779]: I1128 12:41:47.997584 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffdc9c35-1b39-4f14-b555-d4a4e4988112-client-ca" (OuterVolumeSpecName: "client-ca") pod "ffdc9c35-1b39-4f14-b555-d4a4e4988112" (UID: "ffdc9c35-1b39-4f14-b555-d4a4e4988112"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.004137 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffdc9c35-1b39-4f14-b555-d4a4e4988112-kube-api-access-99jkm" (OuterVolumeSpecName: "kube-api-access-99jkm") pod "ffdc9c35-1b39-4f14-b555-d4a4e4988112" (UID: "ffdc9c35-1b39-4f14-b555-d4a4e4988112"). InnerVolumeSpecName "kube-api-access-99jkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.004408 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffdc9c35-1b39-4f14-b555-d4a4e4988112-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ffdc9c35-1b39-4f14-b555-d4a4e4988112" (UID: "ffdc9c35-1b39-4f14-b555-d4a4e4988112"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.098000 4779 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffdc9c35-1b39-4f14-b555-d4a4e4988112-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.098140 4779 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffdc9c35-1b39-4f14-b555-d4a4e4988112-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.098169 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdc9c35-1b39-4f14-b555-d4a4e4988112-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.098195 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99jkm\" (UniqueName: \"kubernetes.io/projected/ffdc9c35-1b39-4f14-b555-d4a4e4988112-kube-api-access-99jkm\") on node \"crc\" DevicePath \"\"" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.357745 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg"] Nov 28 12:41:48 crc kubenswrapper[4779]: E1128 12:41:48.358062 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffdc9c35-1b39-4f14-b555-d4a4e4988112" containerName="route-controller-manager" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.358084 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffdc9c35-1b39-4f14-b555-d4a4e4988112" containerName="route-controller-manager" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.358308 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffdc9c35-1b39-4f14-b555-d4a4e4988112" containerName="route-controller-manager" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.358878 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.371243 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg"] Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.388037 4779 generic.go:334] "Generic (PLEG): container finished" podID="ffdc9c35-1b39-4f14-b555-d4a4e4988112" containerID="da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c" exitCode=0 Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.388132 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.388117 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" event={"ID":"ffdc9c35-1b39-4f14-b555-d4a4e4988112","Type":"ContainerDied","Data":"da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c"} Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.388202 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj" event={"ID":"ffdc9c35-1b39-4f14-b555-d4a4e4988112","Type":"ContainerDied","Data":"3929556619b752347510206006bfe2d1de54f73b603a294ecb24ee6a0d09e54e"} Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.388252 4779 scope.go:117] "RemoveContainer" containerID="da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.400977 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5ba147f-7cec-45ec-a080-b36b65b6cb3e-client-ca\") pod \"route-controller-manager-6c74ffd5fc-f69cg\" (UID: \"f5ba147f-7cec-45ec-a080-b36b65b6cb3e\") " pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.401349 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ba147f-7cec-45ec-a080-b36b65b6cb3e-config\") pod \"route-controller-manager-6c74ffd5fc-f69cg\" (UID: \"f5ba147f-7cec-45ec-a080-b36b65b6cb3e\") " pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.401440 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v68r\" (UniqueName: \"kubernetes.io/projected/f5ba147f-7cec-45ec-a080-b36b65b6cb3e-kube-api-access-2v68r\") pod \"route-controller-manager-6c74ffd5fc-f69cg\" (UID: \"f5ba147f-7cec-45ec-a080-b36b65b6cb3e\") " pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.401505 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5ba147f-7cec-45ec-a080-b36b65b6cb3e-serving-cert\") pod \"route-controller-manager-6c74ffd5fc-f69cg\" (UID: \"f5ba147f-7cec-45ec-a080-b36b65b6cb3e\") " pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.414404 
4779 scope.go:117] "RemoveContainer" containerID="da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c" Nov 28 12:41:48 crc kubenswrapper[4779]: E1128 12:41:48.414926 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c\": container with ID starting with da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c not found: ID does not exist" containerID="da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.415007 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c"} err="failed to get container status \"da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c\": rpc error: code = NotFound desc = could not find container \"da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c\": container with ID starting with da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c not found: ID does not exist" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.424995 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj"] Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.431724 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66649d5fd7-6kzmj"] Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.502521 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5ba147f-7cec-45ec-a080-b36b65b6cb3e-serving-cert\") pod \"route-controller-manager-6c74ffd5fc-f69cg\" (UID: \"f5ba147f-7cec-45ec-a080-b36b65b6cb3e\") " pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.502610 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5ba147f-7cec-45ec-a080-b36b65b6cb3e-client-ca\") pod \"route-controller-manager-6c74ffd5fc-f69cg\" (UID: \"f5ba147f-7cec-45ec-a080-b36b65b6cb3e\") " pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.502683 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ba147f-7cec-45ec-a080-b36b65b6cb3e-config\") pod \"route-controller-manager-6c74ffd5fc-f69cg\" (UID: \"f5ba147f-7cec-45ec-a080-b36b65b6cb3e\") " pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.502785 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v68r\" (UniqueName: \"kubernetes.io/projected/f5ba147f-7cec-45ec-a080-b36b65b6cb3e-kube-api-access-2v68r\") pod \"route-controller-manager-6c74ffd5fc-f69cg\" (UID: \"f5ba147f-7cec-45ec-a080-b36b65b6cb3e\") " pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.504931 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
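
The RemoveContainer / "DeleteContainer returned error" pair above is a harmless race: by the time the kubelet asks the runtime for the container's status, CRI-O has already removed it, and the NotFound answer means the cleanup is effectively done. The usual pattern is to treat NotFound from the runtime as success — a sketch using the gRPC status codes such errors carry, with a hypothetical remove callback:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIfPresent swallows NotFound: if the runtime no longer knows the
    // container ID, the delete already happened, as in the records above.
    func removeIfPresent(id string, remove func(string) error) error {
        if err := remove(id); err != nil {
            if status.Code(err) == codes.NotFound {
                return nil // already gone: success for our purposes
            }
            return err
        }
        return nil
    }

    func main() {
        gone := func(id string) error {
            return status.Errorf(codes.NotFound, "could not find container %q", id)
        }
        fmt.Println(removeIfPresent("da55bcd3d41f109ad5bf0b171245602e44592d668a6a737c3c205211593b1b3c", gone)) // <nil>
    }
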
\"kubernetes.io/configmap/f5ba147f-7cec-45ec-a080-b36b65b6cb3e-client-ca\") pod \"route-controller-manager-6c74ffd5fc-f69cg\" (UID: \"f5ba147f-7cec-45ec-a080-b36b65b6cb3e\") " pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.506663 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5ba147f-7cec-45ec-a080-b36b65b6cb3e-config\") pod \"route-controller-manager-6c74ffd5fc-f69cg\" (UID: \"f5ba147f-7cec-45ec-a080-b36b65b6cb3e\") " pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.520877 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5ba147f-7cec-45ec-a080-b36b65b6cb3e-serving-cert\") pod \"route-controller-manager-6c74ffd5fc-f69cg\" (UID: \"f5ba147f-7cec-45ec-a080-b36b65b6cb3e\") " pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.525206 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v68r\" (UniqueName: \"kubernetes.io/projected/f5ba147f-7cec-45ec-a080-b36b65b6cb3e-kube-api-access-2v68r\") pod \"route-controller-manager-6c74ffd5fc-f69cg\" (UID: \"f5ba147f-7cec-45ec-a080-b36b65b6cb3e\") " pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.681683 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:48 crc kubenswrapper[4779]: I1128 12:41:48.982661 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg"] Nov 28 12:41:48 crc kubenswrapper[4779]: W1128 12:41:48.987485 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5ba147f_7cec_45ec_a080_b36b65b6cb3e.slice/crio-ebb98b08e7dff99b2153d9108c22ba8a82c8cedae0085afff771b923dd40bb33 WatchSource:0}: Error finding container ebb98b08e7dff99b2153d9108c22ba8a82c8cedae0085afff771b923dd40bb33: Status 404 returned error can't find the container with id ebb98b08e7dff99b2153d9108c22ba8a82c8cedae0085afff771b923dd40bb33 Nov 28 12:41:49 crc kubenswrapper[4779]: I1128 12:41:49.397955 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" event={"ID":"f5ba147f-7cec-45ec-a080-b36b65b6cb3e","Type":"ContainerStarted","Data":"0601fd5eaf874c9d8a14e7e177fbf82a197a0ebddb0ca6b16aba102c5e68f73c"} Nov 28 12:41:49 crc kubenswrapper[4779]: I1128 12:41:49.398412 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:41:49 crc kubenswrapper[4779]: I1128 12:41:49.398434 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" event={"ID":"f5ba147f-7cec-45ec-a080-b36b65b6cb3e","Type":"ContainerStarted","Data":"ebb98b08e7dff99b2153d9108c22ba8a82c8cedae0085afff771b923dd40bb33"} Nov 28 12:41:49 crc kubenswrapper[4779]: I1128 12:41:49.424218 4779 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" podStartSLOduration=2.424191028 podStartE2EDuration="2.424191028s" podCreationTimestamp="2025-11-28 12:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:41:49.422544041 +0000 UTC m=+369.988219435" watchObservedRunningTime="2025-11-28 12:41:49.424191028 +0000 UTC m=+369.989866422" Nov 28 12:41:49 crc kubenswrapper[4779]: I1128 12:41:49.738157 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffdc9c35-1b39-4f14-b555-d4a4e4988112" path="/var/lib/kubelet/pods/ffdc9c35-1b39-4f14-b555-d4a4e4988112/volumes" Nov 28 12:41:50 crc kubenswrapper[4779]: I1128 12:41:50.043159 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c74ffd5fc-f69cg" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.489060 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kckg2"] Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.491564 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.506385 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kckg2"] Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.627221 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt5t8\" (UniqueName: \"kubernetes.io/projected/4172d7e4-c775-4b76-8ef1-25192a2db026-kube-api-access-dt5t8\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.627291 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4172d7e4-c775-4b76-8ef1-25192a2db026-trusted-ca\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.627316 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4172d7e4-c775-4b76-8ef1-25192a2db026-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.627371 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4172d7e4-c775-4b76-8ef1-25192a2db026-bound-sa-token\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.627595 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4172d7e4-c775-4b76-8ef1-25192a2db026-registry-tls\") pod 
\"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.627711 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.627828 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4172d7e4-c775-4b76-8ef1-25192a2db026-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.627906 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4172d7e4-c775-4b76-8ef1-25192a2db026-registry-certificates\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.650591 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.733928 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4172d7e4-c775-4b76-8ef1-25192a2db026-trusted-ca\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.734037 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4172d7e4-c775-4b76-8ef1-25192a2db026-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.734125 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4172d7e4-c775-4b76-8ef1-25192a2db026-bound-sa-token\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.734202 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4172d7e4-c775-4b76-8ef1-25192a2db026-registry-tls\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.734272 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4172d7e4-c775-4b76-8ef1-25192a2db026-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.734334 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4172d7e4-c775-4b76-8ef1-25192a2db026-registry-certificates\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.734419 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5t8\" (UniqueName: \"kubernetes.io/projected/4172d7e4-c775-4b76-8ef1-25192a2db026-kube-api-access-dt5t8\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.736518 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4172d7e4-c775-4b76-8ef1-25192a2db026-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.737369 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4172d7e4-c775-4b76-8ef1-25192a2db026-trusted-ca\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.739085 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4172d7e4-c775-4b76-8ef1-25192a2db026-registry-certificates\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.746080 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4172d7e4-c775-4b76-8ef1-25192a2db026-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.746368 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4172d7e4-c775-4b76-8ef1-25192a2db026-registry-tls\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.762720 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt5t8\" (UniqueName: 
\"kubernetes.io/projected/4172d7e4-c775-4b76-8ef1-25192a2db026-kube-api-access-dt5t8\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.764856 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4172d7e4-c775-4b76-8ef1-25192a2db026-bound-sa-token\") pod \"image-registry-66df7c8f76-kckg2\" (UID: \"4172d7e4-c775-4b76-8ef1-25192a2db026\") " pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:03 crc kubenswrapper[4779]: I1128 12:42:03.812048 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:04 crc kubenswrapper[4779]: I1128 12:42:04.060732 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kckg2"] Nov 28 12:42:04 crc kubenswrapper[4779]: I1128 12:42:04.509076 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" event={"ID":"4172d7e4-c775-4b76-8ef1-25192a2db026","Type":"ContainerStarted","Data":"7410ea32d97db16db6c71659a27115c45830be911cb14e500b0828119b98deea"} Nov 28 12:42:04 crc kubenswrapper[4779]: I1128 12:42:04.509135 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" event={"ID":"4172d7e4-c775-4b76-8ef1-25192a2db026","Type":"ContainerStarted","Data":"ade3cc37233021d1e92fd6d4d6737d377c446405a3c6a5bbdac5180cce8c9787"} Nov 28 12:42:04 crc kubenswrapper[4779]: I1128 12:42:04.509311 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:04 crc kubenswrapper[4779]: I1128 12:42:04.534964 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" podStartSLOduration=1.534902877 podStartE2EDuration="1.534902877s" podCreationTimestamp="2025-11-28 12:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:42:04.532330534 +0000 UTC m=+385.098005928" watchObservedRunningTime="2025-11-28 12:42:04.534902877 +0000 UTC m=+385.100578281" Nov 28 12:42:16 crc kubenswrapper[4779]: I1128 12:42:16.285083 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:42:16 crc kubenswrapper[4779]: I1128 12:42:16.285834 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.516400 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xbp9s"] Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.517045 4779 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-xbp9s" podUID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" containerName="registry-server" containerID="cri-o://91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0" gracePeriod=30 Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.533789 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tsxnv"] Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.536672 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tsxnv" podUID="42148fd9-447b-43a5-b513-7cc37b19ab16" containerName="registry-server" containerID="cri-o://1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d" gracePeriod=30 Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.549203 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8kkl"] Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.549569 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" podUID="e2eedfd1-32f1-478a-b46d-939da24ba282" containerName="marketplace-operator" containerID="cri-o://ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c" gracePeriod=30 Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.555361 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svdf4"] Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.555586 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-svdf4" podUID="83ce570d-f1e1-4168-9b49-3da4f6b31209" containerName="registry-server" containerID="cri-o://ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf" gracePeriod=30 Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.569118 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ntps2"] Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.569410 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ntps2" podUID="b54c6e50-f765-4a8c-b147-237821a03d11" containerName="registry-server" containerID="cri-o://56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde" gracePeriod=30 Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.571530 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r6z5b"] Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.572167 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.601720 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r6z5b"] Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.675286 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f8z5\" (UniqueName: \"kubernetes.io/projected/6d803c44-5049-4974-ad24-8bdf8082456f-kube-api-access-2f8z5\") pod \"marketplace-operator-79b997595-r6z5b\" (UID: \"6d803c44-5049-4974-ad24-8bdf8082456f\") " pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.675333 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6d803c44-5049-4974-ad24-8bdf8082456f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r6z5b\" (UID: \"6d803c44-5049-4974-ad24-8bdf8082456f\") " pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.675386 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d803c44-5049-4974-ad24-8bdf8082456f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r6z5b\" (UID: \"6d803c44-5049-4974-ad24-8bdf8082456f\") " pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.778569 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f8z5\" (UniqueName: \"kubernetes.io/projected/6d803c44-5049-4974-ad24-8bdf8082456f-kube-api-access-2f8z5\") pod \"marketplace-operator-79b997595-r6z5b\" (UID: \"6d803c44-5049-4974-ad24-8bdf8082456f\") " pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.778632 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6d803c44-5049-4974-ad24-8bdf8082456f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r6z5b\" (UID: \"6d803c44-5049-4974-ad24-8bdf8082456f\") " pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.778700 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d803c44-5049-4974-ad24-8bdf8082456f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r6z5b\" (UID: \"6d803c44-5049-4974-ad24-8bdf8082456f\") " pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.784977 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6d803c44-5049-4974-ad24-8bdf8082456f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r6z5b\" (UID: \"6d803c44-5049-4974-ad24-8bdf8082456f\") " pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.788365 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/6d803c44-5049-4974-ad24-8bdf8082456f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r6z5b\" (UID: \"6d803c44-5049-4974-ad24-8bdf8082456f\") " pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.798033 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f8z5\" (UniqueName: \"kubernetes.io/projected/6d803c44-5049-4974-ad24-8bdf8082456f-kube-api-access-2f8z5\") pod \"marketplace-operator-79b997595-r6z5b\" (UID: \"6d803c44-5049-4974-ad24-8bdf8082456f\") " pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.977663 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.988798 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xbp9s" Nov 28 12:42:18 crc kubenswrapper[4779]: I1128 12:42:18.993316 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsxnv" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.020710 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.039076 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svdf4" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.045590 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081238 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42148fd9-447b-43a5-b513-7cc37b19ab16-catalog-content\") pod \"42148fd9-447b-43a5-b513-7cc37b19ab16\" (UID: \"42148fd9-447b-43a5-b513-7cc37b19ab16\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081296 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ce570d-f1e1-4168-9b49-3da4f6b31209-catalog-content\") pod \"83ce570d-f1e1-4168-9b49-3da4f6b31209\" (UID: \"83ce570d-f1e1-4168-9b49-3da4f6b31209\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081337 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmjd4\" (UniqueName: \"kubernetes.io/projected/83ce570d-f1e1-4168-9b49-3da4f6b31209-kube-api-access-wmjd4\") pod \"83ce570d-f1e1-4168-9b49-3da4f6b31209\" (UID: \"83ce570d-f1e1-4168-9b49-3da4f6b31209\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081379 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqxlr\" (UniqueName: \"kubernetes.io/projected/b88224c6-06e6-41c7-bba9-cb04ae3361e0-kube-api-access-jqxlr\") pod \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\" (UID: \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081409 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e2eedfd1-32f1-478a-b46d-939da24ba282-marketplace-operator-metrics\") pod \"e2eedfd1-32f1-478a-b46d-939da24ba282\" (UID: \"e2eedfd1-32f1-478a-b46d-939da24ba282\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081454 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54c6e50-f765-4a8c-b147-237821a03d11-catalog-content\") pod \"b54c6e50-f765-4a8c-b147-237821a03d11\" (UID: \"b54c6e50-f765-4a8c-b147-237821a03d11\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081476 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88224c6-06e6-41c7-bba9-cb04ae3361e0-catalog-content\") pod \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\" (UID: \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081496 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftpj8\" (UniqueName: \"kubernetes.io/projected/42148fd9-447b-43a5-b513-7cc37b19ab16-kube-api-access-ftpj8\") pod \"42148fd9-447b-43a5-b513-7cc37b19ab16\" (UID: \"42148fd9-447b-43a5-b513-7cc37b19ab16\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081521 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88224c6-06e6-41c7-bba9-cb04ae3361e0-utilities\") pod \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\" (UID: \"b88224c6-06e6-41c7-bba9-cb04ae3361e0\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081543 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdbm2\" (UniqueName: 
\"kubernetes.io/projected/b54c6e50-f765-4a8c-b147-237821a03d11-kube-api-access-zdbm2\") pod \"b54c6e50-f765-4a8c-b147-237821a03d11\" (UID: \"b54c6e50-f765-4a8c-b147-237821a03d11\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081567 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ce570d-f1e1-4168-9b49-3da4f6b31209-utilities\") pod \"83ce570d-f1e1-4168-9b49-3da4f6b31209\" (UID: \"83ce570d-f1e1-4168-9b49-3da4f6b31209\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081616 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2eedfd1-32f1-478a-b46d-939da24ba282-marketplace-trusted-ca\") pod \"e2eedfd1-32f1-478a-b46d-939da24ba282\" (UID: \"e2eedfd1-32f1-478a-b46d-939da24ba282\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081653 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42148fd9-447b-43a5-b513-7cc37b19ab16-utilities\") pod \"42148fd9-447b-43a5-b513-7cc37b19ab16\" (UID: \"42148fd9-447b-43a5-b513-7cc37b19ab16\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081683 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqckz\" (UniqueName: \"kubernetes.io/projected/e2eedfd1-32f1-478a-b46d-939da24ba282-kube-api-access-dqckz\") pod \"e2eedfd1-32f1-478a-b46d-939da24ba282\" (UID: \"e2eedfd1-32f1-478a-b46d-939da24ba282\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.081711 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54c6e50-f765-4a8c-b147-237821a03d11-utilities\") pod \"b54c6e50-f765-4a8c-b147-237821a03d11\" (UID: \"b54c6e50-f765-4a8c-b147-237821a03d11\") " Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.088819 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b88224c6-06e6-41c7-bba9-cb04ae3361e0-utilities" (OuterVolumeSpecName: "utilities") pod "b88224c6-06e6-41c7-bba9-cb04ae3361e0" (UID: "b88224c6-06e6-41c7-bba9-cb04ae3361e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.093865 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42148fd9-447b-43a5-b513-7cc37b19ab16-utilities" (OuterVolumeSpecName: "utilities") pod "42148fd9-447b-43a5-b513-7cc37b19ab16" (UID: "42148fd9-447b-43a5-b513-7cc37b19ab16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.096183 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b54c6e50-f765-4a8c-b147-237821a03d11-kube-api-access-zdbm2" (OuterVolumeSpecName: "kube-api-access-zdbm2") pod "b54c6e50-f765-4a8c-b147-237821a03d11" (UID: "b54c6e50-f765-4a8c-b147-237821a03d11"). InnerVolumeSpecName "kube-api-access-zdbm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.097932 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88224c6-06e6-41c7-bba9-cb04ae3361e0-kube-api-access-jqxlr" (OuterVolumeSpecName: "kube-api-access-jqxlr") pod "b88224c6-06e6-41c7-bba9-cb04ae3361e0" (UID: "b88224c6-06e6-41c7-bba9-cb04ae3361e0"). InnerVolumeSpecName "kube-api-access-jqxlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.098529 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b54c6e50-f765-4a8c-b147-237821a03d11-utilities" (OuterVolumeSpecName: "utilities") pod "b54c6e50-f765-4a8c-b147-237821a03d11" (UID: "b54c6e50-f765-4a8c-b147-237821a03d11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.100447 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ce570d-f1e1-4168-9b49-3da4f6b31209-kube-api-access-wmjd4" (OuterVolumeSpecName: "kube-api-access-wmjd4") pod "83ce570d-f1e1-4168-9b49-3da4f6b31209" (UID: "83ce570d-f1e1-4168-9b49-3da4f6b31209"). InnerVolumeSpecName "kube-api-access-wmjd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.100565 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2eedfd1-32f1-478a-b46d-939da24ba282-kube-api-access-dqckz" (OuterVolumeSpecName: "kube-api-access-dqckz") pod "e2eedfd1-32f1-478a-b46d-939da24ba282" (UID: "e2eedfd1-32f1-478a-b46d-939da24ba282"). InnerVolumeSpecName "kube-api-access-dqckz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.101253 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ce570d-f1e1-4168-9b49-3da4f6b31209-utilities" (OuterVolumeSpecName: "utilities") pod "83ce570d-f1e1-4168-9b49-3da4f6b31209" (UID: "83ce570d-f1e1-4168-9b49-3da4f6b31209"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.101325 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2eedfd1-32f1-478a-b46d-939da24ba282-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e2eedfd1-32f1-478a-b46d-939da24ba282" (UID: "e2eedfd1-32f1-478a-b46d-939da24ba282"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.102721 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2eedfd1-32f1-478a-b46d-939da24ba282-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e2eedfd1-32f1-478a-b46d-939da24ba282" (UID: "e2eedfd1-32f1-478a-b46d-939da24ba282"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.104685 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42148fd9-447b-43a5-b513-7cc37b19ab16-kube-api-access-ftpj8" (OuterVolumeSpecName: "kube-api-access-ftpj8") pod "42148fd9-447b-43a5-b513-7cc37b19ab16" (UID: "42148fd9-447b-43a5-b513-7cc37b19ab16"). InnerVolumeSpecName "kube-api-access-ftpj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.111944 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ce570d-f1e1-4168-9b49-3da4f6b31209-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83ce570d-f1e1-4168-9b49-3da4f6b31209" (UID: "83ce570d-f1e1-4168-9b49-3da4f6b31209"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.162048 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b88224c6-06e6-41c7-bba9-cb04ae3361e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b88224c6-06e6-41c7-bba9-cb04ae3361e0" (UID: "b88224c6-06e6-41c7-bba9-cb04ae3361e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.173110 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42148fd9-447b-43a5-b513-7cc37b19ab16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42148fd9-447b-43a5-b513-7cc37b19ab16" (UID: "42148fd9-447b-43a5-b513-7cc37b19ab16"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183365 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42148fd9-447b-43a5-b513-7cc37b19ab16-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183392 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ce570d-f1e1-4168-9b49-3da4f6b31209-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183402 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmjd4\" (UniqueName: \"kubernetes.io/projected/83ce570d-f1e1-4168-9b49-3da4f6b31209-kube-api-access-wmjd4\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183415 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqxlr\" (UniqueName: \"kubernetes.io/projected/b88224c6-06e6-41c7-bba9-cb04ae3361e0-kube-api-access-jqxlr\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183424 4779 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e2eedfd1-32f1-478a-b46d-939da24ba282-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183433 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88224c6-06e6-41c7-bba9-cb04ae3361e0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183442 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftpj8\" (UniqueName: \"kubernetes.io/projected/42148fd9-447b-43a5-b513-7cc37b19ab16-kube-api-access-ftpj8\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183451 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdbm2\" (UniqueName: \"kubernetes.io/projected/b54c6e50-f765-4a8c-b147-237821a03d11-kube-api-access-zdbm2\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183458 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88224c6-06e6-41c7-bba9-cb04ae3361e0-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183467 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ce570d-f1e1-4168-9b49-3da4f6b31209-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183476 4779 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2eedfd1-32f1-478a-b46d-939da24ba282-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183484 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42148fd9-447b-43a5-b513-7cc37b19ab16-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183494 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqckz\" (UniqueName: 
\"kubernetes.io/projected/e2eedfd1-32f1-478a-b46d-939da24ba282-kube-api-access-dqckz\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.183502 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b54c6e50-f765-4a8c-b147-237821a03d11-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.220317 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b54c6e50-f765-4a8c-b147-237821a03d11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b54c6e50-f765-4a8c-b147-237821a03d11" (UID: "b54c6e50-f765-4a8c-b147-237821a03d11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.284476 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b54c6e50-f765-4a8c-b147-237821a03d11-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.416032 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r6z5b"] Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.642189 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" event={"ID":"6d803c44-5049-4974-ad24-8bdf8082456f","Type":"ContainerStarted","Data":"51e8184f4703a24cd38f658050f64a3cb9f11571d2d8126b3f573ea4989cfe56"} Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.642258 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" event={"ID":"6d803c44-5049-4974-ad24-8bdf8082456f","Type":"ContainerStarted","Data":"2795b087534b6157e17d2bd1d2451f549e9c46a86e820d6cf27ca740be453bd0"} Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.642694 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.644588 4779 generic.go:334] "Generic (PLEG): container finished" podID="e2eedfd1-32f1-478a-b46d-939da24ba282" containerID="ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c" exitCode=0 Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.644656 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" event={"ID":"e2eedfd1-32f1-478a-b46d-939da24ba282","Type":"ContainerDied","Data":"ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c"} Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.644681 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" event={"ID":"e2eedfd1-32f1-478a-b46d-939da24ba282","Type":"ContainerDied","Data":"c4a3e8a204c557c59d1c6d5dfa04d009d34de817f4e2667f4aa70d66e27ebb46"} Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.644684 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f8kkl" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.644700 4779 scope.go:117] "RemoveContainer" containerID="ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.644757 4779 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-r6z5b container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" start-of-body= Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.644819 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" podUID="6d803c44-5049-4974-ad24-8bdf8082456f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.648644 4779 generic.go:334] "Generic (PLEG): container finished" podID="42148fd9-447b-43a5-b513-7cc37b19ab16" containerID="1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d" exitCode=0 Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.648741 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsxnv" event={"ID":"42148fd9-447b-43a5-b513-7cc37b19ab16","Type":"ContainerDied","Data":"1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d"} Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.648780 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsxnv" event={"ID":"42148fd9-447b-43a5-b513-7cc37b19ab16","Type":"ContainerDied","Data":"b425776c25c35f2d49a5dd81d43b5ad7e236e6e97a658c7556e8fe624297d2e2"} Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.648870 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsxnv" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.651511 4779 generic.go:334] "Generic (PLEG): container finished" podID="83ce570d-f1e1-4168-9b49-3da4f6b31209" containerID="ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf" exitCode=0 Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.651570 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svdf4" event={"ID":"83ce570d-f1e1-4168-9b49-3da4f6b31209","Type":"ContainerDied","Data":"ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf"} Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.651596 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svdf4" event={"ID":"83ce570d-f1e1-4168-9b49-3da4f6b31209","Type":"ContainerDied","Data":"56e32976c361ea8dbe4ddde109d17f6db2f1a86224cd96afddffc12c126a310d"} Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.651671 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svdf4" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.655317 4779 generic.go:334] "Generic (PLEG): container finished" podID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" containerID="91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0" exitCode=0 Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.655414 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xbp9s" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.655423 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbp9s" event={"ID":"b88224c6-06e6-41c7-bba9-cb04ae3361e0","Type":"ContainerDied","Data":"91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0"} Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.655586 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xbp9s" event={"ID":"b88224c6-06e6-41c7-bba9-cb04ae3361e0","Type":"ContainerDied","Data":"35bdeb077301f4e6fba5093ef63afed1142d4f690f8cb8a2b0901b5d59e5134f"} Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.659353 4779 generic.go:334] "Generic (PLEG): container finished" podID="b54c6e50-f765-4a8c-b147-237821a03d11" containerID="56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde" exitCode=0 Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.659402 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntps2" event={"ID":"b54c6e50-f765-4a8c-b147-237821a03d11","Type":"ContainerDied","Data":"56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde"} Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.659431 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ntps2" event={"ID":"b54c6e50-f765-4a8c-b147-237821a03d11","Type":"ContainerDied","Data":"6a89ead843be1c986ecce9648d0e52009571f3229cf4fe631b31137c992d3c35"} Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.659524 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ntps2" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.670952 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b" podStartSLOduration=1.6709303279999999 podStartE2EDuration="1.670930328s" podCreationTimestamp="2025-11-28 12:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:42:19.66358128 +0000 UTC m=+400.229256674" watchObservedRunningTime="2025-11-28 12:42:19.670930328 +0000 UTC m=+400.236605712" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.735164 4779 scope.go:117] "RemoveContainer" containerID="cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.736588 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ntps2"] Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.740664 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ntps2"] Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.749422 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svdf4"] Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.750983 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-svdf4"] Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.756717 4779 scope.go:117] "RemoveContainer" containerID="ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.764303 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c\": container with ID starting with ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c not found: ID does not exist" containerID="ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.764524 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c"} err="failed to get container status \"ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c\": rpc error: code = NotFound desc = could not find container \"ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c\": container with ID starting with ab245bf68bcc940a5f23da47e463ed6e1ed8a884528c2fe3976fb5d14532a93c not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.764633 4779 scope.go:117] "RemoveContainer" containerID="cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.764983 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a\": container with ID starting with cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a not found: ID does not exist" containerID="cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.765067 4779 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a"} err="failed to get container status \"cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a\": rpc error: code = NotFound desc = could not find container \"cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a\": container with ID starting with cb4ef0ff2f057c2da9c5a063639e6985d73deff284d1b4e223d737887d25b78a not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.765172 4779 scope.go:117] "RemoveContainer" containerID="1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.784031 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8kkl"] Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.791422 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8kkl"] Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.792704 4779 scope.go:117] "RemoveContainer" containerID="f4fb6b0df5b911cb50225409e882c801ce78b9b8465e783d177f6213bedf0f27" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.795605 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xbp9s"] Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.799678 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xbp9s"] Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.804387 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tsxnv"] Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.806905 4779 scope.go:117] "RemoveContainer" containerID="cd5c8eb2a19bfbb3e38099d48133fd79f6d7d19e36201d554a6561e20e7eb446" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.808516 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tsxnv"] Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.827275 4779 scope.go:117] "RemoveContainer" containerID="1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.827712 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d\": container with ID starting with 1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d not found: ID does not exist" containerID="1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.827756 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d"} err="failed to get container status \"1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d\": rpc error: code = NotFound desc = could not find container \"1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d\": container with ID starting with 1056b142771cf2102daf3b9a5c47e9041b7884919153262835b8d4f11846b11d not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.827787 4779 scope.go:117] "RemoveContainer" containerID="f4fb6b0df5b911cb50225409e882c801ce78b9b8465e783d177f6213bedf0f27" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.828154 4779 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4fb6b0df5b911cb50225409e882c801ce78b9b8465e783d177f6213bedf0f27\": container with ID starting with f4fb6b0df5b911cb50225409e882c801ce78b9b8465e783d177f6213bedf0f27 not found: ID does not exist" containerID="f4fb6b0df5b911cb50225409e882c801ce78b9b8465e783d177f6213bedf0f27" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.828200 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4fb6b0df5b911cb50225409e882c801ce78b9b8465e783d177f6213bedf0f27"} err="failed to get container status \"f4fb6b0df5b911cb50225409e882c801ce78b9b8465e783d177f6213bedf0f27\": rpc error: code = NotFound desc = could not find container \"f4fb6b0df5b911cb50225409e882c801ce78b9b8465e783d177f6213bedf0f27\": container with ID starting with f4fb6b0df5b911cb50225409e882c801ce78b9b8465e783d177f6213bedf0f27 not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.828234 4779 scope.go:117] "RemoveContainer" containerID="cd5c8eb2a19bfbb3e38099d48133fd79f6d7d19e36201d554a6561e20e7eb446" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.828674 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd5c8eb2a19bfbb3e38099d48133fd79f6d7d19e36201d554a6561e20e7eb446\": container with ID starting with cd5c8eb2a19bfbb3e38099d48133fd79f6d7d19e36201d554a6561e20e7eb446 not found: ID does not exist" containerID="cd5c8eb2a19bfbb3e38099d48133fd79f6d7d19e36201d554a6561e20e7eb446" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.828707 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd5c8eb2a19bfbb3e38099d48133fd79f6d7d19e36201d554a6561e20e7eb446"} err="failed to get container status \"cd5c8eb2a19bfbb3e38099d48133fd79f6d7d19e36201d554a6561e20e7eb446\": rpc error: code = NotFound desc = could not find container \"cd5c8eb2a19bfbb3e38099d48133fd79f6d7d19e36201d554a6561e20e7eb446\": container with ID starting with cd5c8eb2a19bfbb3e38099d48133fd79f6d7d19e36201d554a6561e20e7eb446 not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.828729 4779 scope.go:117] "RemoveContainer" containerID="ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.841378 4779 scope.go:117] "RemoveContainer" containerID="305b1a87468b65dbb5dcaf99ee16f8a51e115b06a3f61e656abf9531452cf3fb" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.859695 4779 scope.go:117] "RemoveContainer" containerID="852465927e704c9a28d444fa960c537018e72e62ad6db0c39a048f6c67947ae7" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.878306 4779 scope.go:117] "RemoveContainer" containerID="ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.879985 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf\": container with ID starting with ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf not found: ID does not exist" containerID="ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.880036 4779 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf"} err="failed to get container status \"ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf\": rpc error: code = NotFound desc = could not find container \"ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf\": container with ID starting with ae5a94c60b1165f00d9c67227a7fa3c2f315ebdca6720d8b0929c792e7f9a5bf not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.880068 4779 scope.go:117] "RemoveContainer" containerID="305b1a87468b65dbb5dcaf99ee16f8a51e115b06a3f61e656abf9531452cf3fb" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.880513 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"305b1a87468b65dbb5dcaf99ee16f8a51e115b06a3f61e656abf9531452cf3fb\": container with ID starting with 305b1a87468b65dbb5dcaf99ee16f8a51e115b06a3f61e656abf9531452cf3fb not found: ID does not exist" containerID="305b1a87468b65dbb5dcaf99ee16f8a51e115b06a3f61e656abf9531452cf3fb" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.880584 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"305b1a87468b65dbb5dcaf99ee16f8a51e115b06a3f61e656abf9531452cf3fb"} err="failed to get container status \"305b1a87468b65dbb5dcaf99ee16f8a51e115b06a3f61e656abf9531452cf3fb\": rpc error: code = NotFound desc = could not find container \"305b1a87468b65dbb5dcaf99ee16f8a51e115b06a3f61e656abf9531452cf3fb\": container with ID starting with 305b1a87468b65dbb5dcaf99ee16f8a51e115b06a3f61e656abf9531452cf3fb not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.880602 4779 scope.go:117] "RemoveContainer" containerID="852465927e704c9a28d444fa960c537018e72e62ad6db0c39a048f6c67947ae7" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.881174 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"852465927e704c9a28d444fa960c537018e72e62ad6db0c39a048f6c67947ae7\": container with ID starting with 852465927e704c9a28d444fa960c537018e72e62ad6db0c39a048f6c67947ae7 not found: ID does not exist" containerID="852465927e704c9a28d444fa960c537018e72e62ad6db0c39a048f6c67947ae7" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.881211 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"852465927e704c9a28d444fa960c537018e72e62ad6db0c39a048f6c67947ae7"} err="failed to get container status \"852465927e704c9a28d444fa960c537018e72e62ad6db0c39a048f6c67947ae7\": rpc error: code = NotFound desc = could not find container \"852465927e704c9a28d444fa960c537018e72e62ad6db0c39a048f6c67947ae7\": container with ID starting with 852465927e704c9a28d444fa960c537018e72e62ad6db0c39a048f6c67947ae7 not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.881238 4779 scope.go:117] "RemoveContainer" containerID="91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.918951 4779 scope.go:117] "RemoveContainer" containerID="8a7027c816b1c342706f6823486a004c029dbcf3900213bc802b7ae7e3f83e2a" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.934970 4779 scope.go:117] "RemoveContainer" containerID="a52a65567c485270a1404e8dee35c267d3e8a02e4f62a4665211b2525d0420a2" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.946265 4779 
scope.go:117] "RemoveContainer" containerID="91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.946560 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0\": container with ID starting with 91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0 not found: ID does not exist" containerID="91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.946588 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0"} err="failed to get container status \"91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0\": rpc error: code = NotFound desc = could not find container \"91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0\": container with ID starting with 91c773c489065c1aa57e7daa45c3737e2e04fd2cceb08c784419c5f837f178a0 not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.946608 4779 scope.go:117] "RemoveContainer" containerID="8a7027c816b1c342706f6823486a004c029dbcf3900213bc802b7ae7e3f83e2a" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.946855 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a7027c816b1c342706f6823486a004c029dbcf3900213bc802b7ae7e3f83e2a\": container with ID starting with 8a7027c816b1c342706f6823486a004c029dbcf3900213bc802b7ae7e3f83e2a not found: ID does not exist" containerID="8a7027c816b1c342706f6823486a004c029dbcf3900213bc802b7ae7e3f83e2a" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.946876 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a7027c816b1c342706f6823486a004c029dbcf3900213bc802b7ae7e3f83e2a"} err="failed to get container status \"8a7027c816b1c342706f6823486a004c029dbcf3900213bc802b7ae7e3f83e2a\": rpc error: code = NotFound desc = could not find container \"8a7027c816b1c342706f6823486a004c029dbcf3900213bc802b7ae7e3f83e2a\": container with ID starting with 8a7027c816b1c342706f6823486a004c029dbcf3900213bc802b7ae7e3f83e2a not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.946887 4779 scope.go:117] "RemoveContainer" containerID="a52a65567c485270a1404e8dee35c267d3e8a02e4f62a4665211b2525d0420a2" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.947077 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52a65567c485270a1404e8dee35c267d3e8a02e4f62a4665211b2525d0420a2\": container with ID starting with a52a65567c485270a1404e8dee35c267d3e8a02e4f62a4665211b2525d0420a2 not found: ID does not exist" containerID="a52a65567c485270a1404e8dee35c267d3e8a02e4f62a4665211b2525d0420a2" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.947108 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52a65567c485270a1404e8dee35c267d3e8a02e4f62a4665211b2525d0420a2"} err="failed to get container status \"a52a65567c485270a1404e8dee35c267d3e8a02e4f62a4665211b2525d0420a2\": rpc error: code = NotFound desc = could not find container \"a52a65567c485270a1404e8dee35c267d3e8a02e4f62a4665211b2525d0420a2\": container with ID starting with 
a52a65567c485270a1404e8dee35c267d3e8a02e4f62a4665211b2525d0420a2 not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.947119 4779 scope.go:117] "RemoveContainer" containerID="56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.961232 4779 scope.go:117] "RemoveContainer" containerID="d49785b3671b039bcf84d7819ebbcb9807bb00cea962024f0f1fcb2db492cca1" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.979523 4779 scope.go:117] "RemoveContainer" containerID="8e28d07389fb63d0ab4f940e1455f97d96512528314398a81dc7d88c94798cc2" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.993506 4779 scope.go:117] "RemoveContainer" containerID="56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.993829 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde\": container with ID starting with 56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde not found: ID does not exist" containerID="56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.993868 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde"} err="failed to get container status \"56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde\": rpc error: code = NotFound desc = could not find container \"56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde\": container with ID starting with 56eb81dec693b51964300e27276315a8bf8d62ab1bc0a2b5ecae56b5245d5cde not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.993896 4779 scope.go:117] "RemoveContainer" containerID="d49785b3671b039bcf84d7819ebbcb9807bb00cea962024f0f1fcb2db492cca1" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.994218 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d49785b3671b039bcf84d7819ebbcb9807bb00cea962024f0f1fcb2db492cca1\": container with ID starting with d49785b3671b039bcf84d7819ebbcb9807bb00cea962024f0f1fcb2db492cca1 not found: ID does not exist" containerID="d49785b3671b039bcf84d7819ebbcb9807bb00cea962024f0f1fcb2db492cca1" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.994251 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d49785b3671b039bcf84d7819ebbcb9807bb00cea962024f0f1fcb2db492cca1"} err="failed to get container status \"d49785b3671b039bcf84d7819ebbcb9807bb00cea962024f0f1fcb2db492cca1\": rpc error: code = NotFound desc = could not find container \"d49785b3671b039bcf84d7819ebbcb9807bb00cea962024f0f1fcb2db492cca1\": container with ID starting with d49785b3671b039bcf84d7819ebbcb9807bb00cea962024f0f1fcb2db492cca1 not found: ID does not exist" Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.994273 4779 scope.go:117] "RemoveContainer" containerID="8e28d07389fb63d0ab4f940e1455f97d96512528314398a81dc7d88c94798cc2" Nov 28 12:42:19 crc kubenswrapper[4779]: E1128 12:42:19.994636 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e28d07389fb63d0ab4f940e1455f97d96512528314398a81dc7d88c94798cc2\": container 
Nov 28 12:42:19 crc kubenswrapper[4779]: I1128 12:42:19.994656 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e28d07389fb63d0ab4f940e1455f97d96512528314398a81dc7d88c94798cc2"} err="failed to get container status \"8e28d07389fb63d0ab4f940e1455f97d96512528314398a81dc7d88c94798cc2\": rpc error: code = NotFound desc = could not find container \"8e28d07389fb63d0ab4f940e1455f97d96512528314398a81dc7d88c94798cc2\": container with ID starting with 8e28d07389fb63d0ab4f940e1455f97d96512528314398a81dc7d88c94798cc2 not found: ID does not exist"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.684907 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-r6z5b"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.747636 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jngtm"]
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.747844 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42148fd9-447b-43a5-b513-7cc37b19ab16" containerName="extract-utilities"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.747859 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="42148fd9-447b-43a5-b513-7cc37b19ab16" containerName="extract-utilities"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.747870 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2eedfd1-32f1-478a-b46d-939da24ba282" containerName="marketplace-operator"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.747877 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2eedfd1-32f1-478a-b46d-939da24ba282" containerName="marketplace-operator"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.747889 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" containerName="extract-utilities"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.747897 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" containerName="extract-utilities"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.747910 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2eedfd1-32f1-478a-b46d-939da24ba282" containerName="marketplace-operator"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.747918 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2eedfd1-32f1-478a-b46d-939da24ba282" containerName="marketplace-operator"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.747928 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54c6e50-f765-4a8c-b147-237821a03d11" containerName="registry-server"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.747935 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54c6e50-f765-4a8c-b147-237821a03d11" containerName="registry-server"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.747946 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42148fd9-447b-43a5-b513-7cc37b19ab16" containerName="extract-content"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.747954 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="42148fd9-447b-43a5-b513-7cc37b19ab16" containerName="extract-content"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.747966 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ce570d-f1e1-4168-9b49-3da4f6b31209" containerName="extract-utilities"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.747974 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ce570d-f1e1-4168-9b49-3da4f6b31209" containerName="extract-utilities"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.747983 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" containerName="registry-server"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.747992 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" containerName="registry-server"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.748002 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54c6e50-f765-4a8c-b147-237821a03d11" containerName="extract-content"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.748026 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54c6e50-f765-4a8c-b147-237821a03d11" containerName="extract-content"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.748038 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54c6e50-f765-4a8c-b147-237821a03d11" containerName="extract-utilities"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.748045 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54c6e50-f765-4a8c-b147-237821a03d11" containerName="extract-utilities"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.748054 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ce570d-f1e1-4168-9b49-3da4f6b31209" containerName="registry-server"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.748062 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ce570d-f1e1-4168-9b49-3da4f6b31209" containerName="registry-server"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.748071 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" containerName="extract-content"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.748079 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" containerName="extract-content"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.748103 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ce570d-f1e1-4168-9b49-3da4f6b31209" containerName="extract-content"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.748112 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ce570d-f1e1-4168-9b49-3da4f6b31209" containerName="extract-content"
Nov 28 12:42:20 crc kubenswrapper[4779]: E1128 12:42:20.748120 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42148fd9-447b-43a5-b513-7cc37b19ab16" containerName="registry-server"
Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.748128 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="42148fd9-447b-43a5-b513-7cc37b19ab16" containerName="registry-server"
podUID="42148fd9-447b-43a5-b513-7cc37b19ab16" containerName="registry-server" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.748267 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="b54c6e50-f765-4a8c-b147-237821a03d11" containerName="registry-server" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.748280 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2eedfd1-32f1-478a-b46d-939da24ba282" containerName="marketplace-operator" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.748293 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ce570d-f1e1-4168-9b49-3da4f6b31209" containerName="registry-server" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.748305 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" containerName="registry-server" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.749349 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.755918 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.773926 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jngtm"] Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.820927 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb5fca6-5157-4e18-8223-59f88908f1c8-utilities\") pod \"redhat-marketplace-jngtm\" (UID: \"aeb5fca6-5157-4e18-8223-59f88908f1c8\") " pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.820994 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxbk5\" (UniqueName: \"kubernetes.io/projected/aeb5fca6-5157-4e18-8223-59f88908f1c8-kube-api-access-gxbk5\") pod \"redhat-marketplace-jngtm\" (UID: \"aeb5fca6-5157-4e18-8223-59f88908f1c8\") " pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.821019 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb5fca6-5157-4e18-8223-59f88908f1c8-catalog-content\") pod \"redhat-marketplace-jngtm\" (UID: \"aeb5fca6-5157-4e18-8223-59f88908f1c8\") " pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.921885 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxbk5\" (UniqueName: \"kubernetes.io/projected/aeb5fca6-5157-4e18-8223-59f88908f1c8-kube-api-access-gxbk5\") pod \"redhat-marketplace-jngtm\" (UID: \"aeb5fca6-5157-4e18-8223-59f88908f1c8\") " pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.921943 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb5fca6-5157-4e18-8223-59f88908f1c8-catalog-content\") pod \"redhat-marketplace-jngtm\" (UID: \"aeb5fca6-5157-4e18-8223-59f88908f1c8\") " pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 
12:42:20.922000 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb5fca6-5157-4e18-8223-59f88908f1c8-utilities\") pod \"redhat-marketplace-jngtm\" (UID: \"aeb5fca6-5157-4e18-8223-59f88908f1c8\") " pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.922399 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeb5fca6-5157-4e18-8223-59f88908f1c8-utilities\") pod \"redhat-marketplace-jngtm\" (UID: \"aeb5fca6-5157-4e18-8223-59f88908f1c8\") " pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.922889 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeb5fca6-5157-4e18-8223-59f88908f1c8-catalog-content\") pod \"redhat-marketplace-jngtm\" (UID: \"aeb5fca6-5157-4e18-8223-59f88908f1c8\") " pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.945387 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hppfn"] Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.946965 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hppfn" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.952158 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.954556 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxbk5\" (UniqueName: \"kubernetes.io/projected/aeb5fca6-5157-4e18-8223-59f88908f1c8-kube-api-access-gxbk5\") pod \"redhat-marketplace-jngtm\" (UID: \"aeb5fca6-5157-4e18-8223-59f88908f1c8\") " pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:20 crc kubenswrapper[4779]: I1128 12:42:20.956904 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hppfn"] Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.023707 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djrcv\" (UniqueName: \"kubernetes.io/projected/b5d5dfb9-ebff-4d12-af9a-53220c054a90-kube-api-access-djrcv\") pod \"redhat-operators-hppfn\" (UID: \"b5d5dfb9-ebff-4d12-af9a-53220c054a90\") " pod="openshift-marketplace/redhat-operators-hppfn" Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.023830 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d5dfb9-ebff-4d12-af9a-53220c054a90-catalog-content\") pod \"redhat-operators-hppfn\" (UID: \"b5d5dfb9-ebff-4d12-af9a-53220c054a90\") " pod="openshift-marketplace/redhat-operators-hppfn" Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.023888 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d5dfb9-ebff-4d12-af9a-53220c054a90-utilities\") pod \"redhat-operators-hppfn\" (UID: \"b5d5dfb9-ebff-4d12-af9a-53220c054a90\") " pod="openshift-marketplace/redhat-operators-hppfn" Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.082179 4779 util.go:30] "No sandbox for pod can be 
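The volume records above trace the standard bring-up order for a new pod: one VerifyControllerAttachedVolume record per volume, then MountVolume, then a MountVolume.SetUp success per volume; note that the projected kube-api-access-gxbk5 token for redhat-marketplace-jngtm completed last (12:42:20.954556) while the emptyDirs succeeded immediately. A toy sketch of that sequencing, not kubelet's actual operation executor:

```go
package main

import "fmt"

// mountAll walks each volume through the verify -> mount -> setup stages
// in order, mirroring the reconciler_common.go:245/218 record sequence.
func mountAll(pod string, volumes []string) {
	for _, v := range volumes {
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod=%q\n", v, pod)
	}
	for _, v := range volumes {
		fmt.Printf("MountVolume started for volume %q pod=%q\n", v, pod)
		// SetUp prepares the mount point; emptyDirs succeed at once,
		// projected service-account tokens may wait on API data.
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod=%q\n", v, pod)
	}
}

func main() {
	mountAll("openshift-marketplace/redhat-operators-hppfn",
		[]string{"catalog-content", "utilities", "kube-api-access-djrcv"})
}
```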
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.125355 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d5dfb9-ebff-4d12-af9a-53220c054a90-catalog-content\") pod \"redhat-operators-hppfn\" (UID: \"b5d5dfb9-ebff-4d12-af9a-53220c054a90\") " pod="openshift-marketplace/redhat-operators-hppfn"
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.125412 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d5dfb9-ebff-4d12-af9a-53220c054a90-utilities\") pod \"redhat-operators-hppfn\" (UID: \"b5d5dfb9-ebff-4d12-af9a-53220c054a90\") " pod="openshift-marketplace/redhat-operators-hppfn"
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.125449 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djrcv\" (UniqueName: \"kubernetes.io/projected/b5d5dfb9-ebff-4d12-af9a-53220c054a90-kube-api-access-djrcv\") pod \"redhat-operators-hppfn\" (UID: \"b5d5dfb9-ebff-4d12-af9a-53220c054a90\") " pod="openshift-marketplace/redhat-operators-hppfn"
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.126046 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d5dfb9-ebff-4d12-af9a-53220c054a90-catalog-content\") pod \"redhat-operators-hppfn\" (UID: \"b5d5dfb9-ebff-4d12-af9a-53220c054a90\") " pod="openshift-marketplace/redhat-operators-hppfn"
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.126315 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d5dfb9-ebff-4d12-af9a-53220c054a90-utilities\") pod \"redhat-operators-hppfn\" (UID: \"b5d5dfb9-ebff-4d12-af9a-53220c054a90\") " pod="openshift-marketplace/redhat-operators-hppfn"
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.154362 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djrcv\" (UniqueName: \"kubernetes.io/projected/b5d5dfb9-ebff-4d12-af9a-53220c054a90-kube-api-access-djrcv\") pod \"redhat-operators-hppfn\" (UID: \"b5d5dfb9-ebff-4d12-af9a-53220c054a90\") " pod="openshift-marketplace/redhat-operators-hppfn"
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.278808 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hppfn"
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.469895 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hppfn"]
Nov 28 12:42:21 crc kubenswrapper[4779]: W1128 12:42:21.473369 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5d5dfb9_ebff_4d12_af9a_53220c054a90.slice/crio-c450e15aad8df67bb0a5892e864282927c37ba1ab7ec36b5243cd7c6317ac4e1 WatchSource:0}: Error finding container c450e15aad8df67bb0a5892e864282927c37ba1ab7ec36b5243cd7c6317ac4e1: Status 404 returned error can't find the container with id c450e15aad8df67bb0a5892e864282927c37ba1ab7ec36b5243cd7c6317ac4e1
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.542422 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jngtm"]
Nov 28 12:42:21 crc kubenswrapper[4779]: W1128 12:42:21.549596 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeb5fca6_5157_4e18_8223_59f88908f1c8.slice/crio-2e47f0cd4c3ab600ba367e2a77e513547ecb008b897653794355b4a797859bdf WatchSource:0}: Error finding container 2e47f0cd4c3ab600ba367e2a77e513547ecb008b897653794355b4a797859bdf: Status 404 returned error can't find the container with id 2e47f0cd4c3ab600ba367e2a77e513547ecb008b897653794355b4a797859bdf
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.689390 4779 generic.go:334] "Generic (PLEG): container finished" podID="aeb5fca6-5157-4e18-8223-59f88908f1c8" containerID="a3684dfe0b72216d55546a0d9b7520c97fe801914151cab02106a2de0236b3c7" exitCode=0
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.689438 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jngtm" event={"ID":"aeb5fca6-5157-4e18-8223-59f88908f1c8","Type":"ContainerDied","Data":"a3684dfe0b72216d55546a0d9b7520c97fe801914151cab02106a2de0236b3c7"}
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.689501 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jngtm" event={"ID":"aeb5fca6-5157-4e18-8223-59f88908f1c8","Type":"ContainerStarted","Data":"2e47f0cd4c3ab600ba367e2a77e513547ecb008b897653794355b4a797859bdf"}
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.691608 4779 generic.go:334] "Generic (PLEG): container finished" podID="b5d5dfb9-ebff-4d12-af9a-53220c054a90" containerID="f31d4fb6f7acfc82da7b258d52e215889648bf5a8c0c5d8c3b51dbd866aa754f" exitCode=0
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.692476 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hppfn" event={"ID":"b5d5dfb9-ebff-4d12-af9a-53220c054a90","Type":"ContainerDied","Data":"f31d4fb6f7acfc82da7b258d52e215889648bf5a8c0c5d8c3b51dbd866aa754f"}
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.692491 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hppfn" event={"ID":"b5d5dfb9-ebff-4d12-af9a-53220c054a90","Type":"ContainerStarted","Data":"c450e15aad8df67bb0a5892e864282927c37ba1ab7ec36b5243cd7c6317ac4e1"}
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.736531 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42148fd9-447b-43a5-b513-7cc37b19ab16" path="/var/lib/kubelet/pods/42148fd9-447b-43a5-b513-7cc37b19ab16/volumes"
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.737751 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ce570d-f1e1-4168-9b49-3da4f6b31209" path="/var/lib/kubelet/pods/83ce570d-f1e1-4168-9b49-3da4f6b31209/volumes"
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.738886 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b54c6e50-f765-4a8c-b147-237821a03d11" path="/var/lib/kubelet/pods/b54c6e50-f765-4a8c-b147-237821a03d11/volumes"
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.740847 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b88224c6-06e6-41c7-bba9-cb04ae3361e0" path="/var/lib/kubelet/pods/b88224c6-06e6-41c7-bba9-cb04ae3361e0/volumes"
Nov 28 12:42:21 crc kubenswrapper[4779]: I1128 12:42:21.741472 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2eedfd1-32f1-478a-b46d-939da24ba282" path="/var/lib/kubelet/pods/e2eedfd1-32f1-478a-b46d-939da24ba282/volumes"
Nov 28 12:42:22 crc kubenswrapper[4779]: I1128 12:42:22.702828 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hppfn" event={"ID":"b5d5dfb9-ebff-4d12-af9a-53220c054a90","Type":"ContainerStarted","Data":"24f735f55866f82bfff59810c25d82338c2e2311ef33763c1ffddcbb674e34b8"}
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.145050 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6qmdc"]
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.147012 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qmdc"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.149142 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.162164 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qmdc"]
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.251675 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rff8m\" (UniqueName: \"kubernetes.io/projected/5b79674b-d129-4bf4-91f2-77b42f1d51ea-kube-api-access-rff8m\") pod \"certified-operators-6qmdc\" (UID: \"5b79674b-d129-4bf4-91f2-77b42f1d51ea\") " pod="openshift-marketplace/certified-operators-6qmdc"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.251981 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b79674b-d129-4bf4-91f2-77b42f1d51ea-catalog-content\") pod \"certified-operators-6qmdc\" (UID: \"5b79674b-d129-4bf4-91f2-77b42f1d51ea\") " pod="openshift-marketplace/certified-operators-6qmdc"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.252153 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b79674b-d129-4bf4-91f2-77b42f1d51ea-utilities\") pod \"certified-operators-6qmdc\" (UID: \"5b79674b-d129-4bf4-91f2-77b42f1d51ea\") " pod="openshift-marketplace/certified-operators-6qmdc"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.349656 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gdz82"]
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.351236 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdz82"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.353562 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rff8m\" (UniqueName: \"kubernetes.io/projected/5b79674b-d129-4bf4-91f2-77b42f1d51ea-kube-api-access-rff8m\") pod \"certified-operators-6qmdc\" (UID: \"5b79674b-d129-4bf4-91f2-77b42f1d51ea\") " pod="openshift-marketplace/certified-operators-6qmdc"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.353646 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b79674b-d129-4bf4-91f2-77b42f1d51ea-catalog-content\") pod \"certified-operators-6qmdc\" (UID: \"5b79674b-d129-4bf4-91f2-77b42f1d51ea\") " pod="openshift-marketplace/certified-operators-6qmdc"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.353692 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b79674b-d129-4bf4-91f2-77b42f1d51ea-utilities\") pod \"certified-operators-6qmdc\" (UID: \"5b79674b-d129-4bf4-91f2-77b42f1d51ea\") " pod="openshift-marketplace/certified-operators-6qmdc"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.354423 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b79674b-d129-4bf4-91f2-77b42f1d51ea-utilities\") pod \"certified-operators-6qmdc\" (UID: \"5b79674b-d129-4bf4-91f2-77b42f1d51ea\") " pod="openshift-marketplace/certified-operators-6qmdc"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.354435 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b79674b-d129-4bf4-91f2-77b42f1d51ea-catalog-content\") pod \"certified-operators-6qmdc\" (UID: \"5b79674b-d129-4bf4-91f2-77b42f1d51ea\") " pod="openshift-marketplace/certified-operators-6qmdc"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.354803 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.363874 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdz82"]
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.386027 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rff8m\" (UniqueName: \"kubernetes.io/projected/5b79674b-d129-4bf4-91f2-77b42f1d51ea-kube-api-access-rff8m\") pod \"certified-operators-6qmdc\" (UID: \"5b79674b-d129-4bf4-91f2-77b42f1d51ea\") " pod="openshift-marketplace/certified-operators-6qmdc"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.455110 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbgvb\" (UniqueName: \"kubernetes.io/projected/218924d0-58ac-460f-a4f6-f00925ee6a97-kube-api-access-sbgvb\") pod \"community-operators-gdz82\" (UID: \"218924d0-58ac-460f-a4f6-f00925ee6a97\") " pod="openshift-marketplace/community-operators-gdz82"
Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.455577 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218924d0-58ac-460f-a4f6-f00925ee6a97-catalog-content\") pod \"community-operators-gdz82\" (UID: \"218924d0-58ac-460f-a4f6-f00925ee6a97\") " pod="openshift-marketplace/community-operators-gdz82"
\"community-operators-gdz82\" (UID: \"218924d0-58ac-460f-a4f6-f00925ee6a97\") " pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.455605 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218924d0-58ac-460f-a4f6-f00925ee6a97-utilities\") pod \"community-operators-gdz82\" (UID: \"218924d0-58ac-460f-a4f6-f00925ee6a97\") " pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.469182 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qmdc" Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.557618 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218924d0-58ac-460f-a4f6-f00925ee6a97-catalog-content\") pod \"community-operators-gdz82\" (UID: \"218924d0-58ac-460f-a4f6-f00925ee6a97\") " pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.557653 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218924d0-58ac-460f-a4f6-f00925ee6a97-utilities\") pod \"community-operators-gdz82\" (UID: \"218924d0-58ac-460f-a4f6-f00925ee6a97\") " pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.557707 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbgvb\" (UniqueName: \"kubernetes.io/projected/218924d0-58ac-460f-a4f6-f00925ee6a97-kube-api-access-sbgvb\") pod \"community-operators-gdz82\" (UID: \"218924d0-58ac-460f-a4f6-f00925ee6a97\") " pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.558362 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218924d0-58ac-460f-a4f6-f00925ee6a97-catalog-content\") pod \"community-operators-gdz82\" (UID: \"218924d0-58ac-460f-a4f6-f00925ee6a97\") " pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.559633 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218924d0-58ac-460f-a4f6-f00925ee6a97-utilities\") pod \"community-operators-gdz82\" (UID: \"218924d0-58ac-460f-a4f6-f00925ee6a97\") " pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.585305 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbgvb\" (UniqueName: \"kubernetes.io/projected/218924d0-58ac-460f-a4f6-f00925ee6a97-kube-api-access-sbgvb\") pod \"community-operators-gdz82\" (UID: \"218924d0-58ac-460f-a4f6-f00925ee6a97\") " pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.667829 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.709939 4779 generic.go:334] "Generic (PLEG): container finished" podID="b5d5dfb9-ebff-4d12-af9a-53220c054a90" containerID="24f735f55866f82bfff59810c25d82338c2e2311ef33763c1ffddcbb674e34b8" exitCode=0 Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.710011 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hppfn" event={"ID":"b5d5dfb9-ebff-4d12-af9a-53220c054a90","Type":"ContainerDied","Data":"24f735f55866f82bfff59810c25d82338c2e2311ef33763c1ffddcbb674e34b8"} Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.716531 4779 generic.go:334] "Generic (PLEG): container finished" podID="aeb5fca6-5157-4e18-8223-59f88908f1c8" containerID="63f7e9d6ad25821876ec4b865e075af823f125e4812ce3c8619a7a36fe4c3cc2" exitCode=0 Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.716567 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jngtm" event={"ID":"aeb5fca6-5157-4e18-8223-59f88908f1c8","Type":"ContainerDied","Data":"63f7e9d6ad25821876ec4b865e075af823f125e4812ce3c8619a7a36fe4c3cc2"} Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.821816 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-kckg2" Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.842881 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qmdc"] Nov 28 12:42:23 crc kubenswrapper[4779]: I1128 12:42:23.881139 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gfm2w"] Nov 28 12:42:24 crc kubenswrapper[4779]: I1128 12:42:24.071573 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdz82"] Nov 28 12:42:24 crc kubenswrapper[4779]: W1128 12:42:24.075383 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod218924d0_58ac_460f_a4f6_f00925ee6a97.slice/crio-80dcab5bc3a6449ee93a338a131192f3a329aaab688a672bd087892dcdfbc320 WatchSource:0}: Error finding container 80dcab5bc3a6449ee93a338a131192f3a329aaab688a672bd087892dcdfbc320: Status 404 returned error can't find the container with id 80dcab5bc3a6449ee93a338a131192f3a329aaab688a672bd087892dcdfbc320 Nov 28 12:42:24 crc kubenswrapper[4779]: I1128 12:42:24.724575 4779 generic.go:334] "Generic (PLEG): container finished" podID="218924d0-58ac-460f-a4f6-f00925ee6a97" containerID="f22b16569d1d8d22ee899af636bc71a7c354b01eb5e5983cc33e79343a6eeece" exitCode=0 Nov 28 12:42:24 crc kubenswrapper[4779]: I1128 12:42:24.724719 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdz82" event={"ID":"218924d0-58ac-460f-a4f6-f00925ee6a97","Type":"ContainerDied","Data":"f22b16569d1d8d22ee899af636bc71a7c354b01eb5e5983cc33e79343a6eeece"} Nov 28 12:42:24 crc kubenswrapper[4779]: I1128 12:42:24.725218 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdz82" event={"ID":"218924d0-58ac-460f-a4f6-f00925ee6a97","Type":"ContainerStarted","Data":"80dcab5bc3a6449ee93a338a131192f3a329aaab688a672bd087892dcdfbc320"} Nov 28 12:42:24 crc kubenswrapper[4779]: I1128 12:42:24.727757 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hppfn" event={"ID":"b5d5dfb9-ebff-4d12-af9a-53220c054a90","Type":"ContainerStarted","Data":"0840dcfdeab43bd06959e88cf4949c0a5f9b9f0f3ca3802ce324baf72e5d1c7d"} Nov 28 12:42:24 crc kubenswrapper[4779]: I1128 12:42:24.730142 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jngtm" event={"ID":"aeb5fca6-5157-4e18-8223-59f88908f1c8","Type":"ContainerStarted","Data":"e804db90a27a1fe8d360f91381125eec044c24d5577369687a8309694af97f51"} Nov 28 12:42:24 crc kubenswrapper[4779]: I1128 12:42:24.734516 4779 generic.go:334] "Generic (PLEG): container finished" podID="5b79674b-d129-4bf4-91f2-77b42f1d51ea" containerID="4668801a9ea0988224764e8fa126eaf6f5f86d7e562c2c807c4d9789eb562b24" exitCode=0 Nov 28 12:42:24 crc kubenswrapper[4779]: I1128 12:42:24.734556 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmdc" event={"ID":"5b79674b-d129-4bf4-91f2-77b42f1d51ea","Type":"ContainerDied","Data":"4668801a9ea0988224764e8fa126eaf6f5f86d7e562c2c807c4d9789eb562b24"} Nov 28 12:42:24 crc kubenswrapper[4779]: I1128 12:42:24.734580 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmdc" event={"ID":"5b79674b-d129-4bf4-91f2-77b42f1d51ea","Type":"ContainerStarted","Data":"7a4cbdbf011b6b684f896c9c5f527c148f5bc885696304134e20d7aebf5b429a"} Nov 28 12:42:24 crc kubenswrapper[4779]: I1128 12:42:24.769316 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jngtm" podStartSLOduration=1.928198358 podStartE2EDuration="4.769302708s" podCreationTimestamp="2025-11-28 12:42:20 +0000 UTC" firstStartedPulling="2025-11-28 12:42:21.690521159 +0000 UTC m=+402.256196543" lastFinishedPulling="2025-11-28 12:42:24.531625519 +0000 UTC m=+405.097300893" observedRunningTime="2025-11-28 12:42:24.768777793 +0000 UTC m=+405.334453187" watchObservedRunningTime="2025-11-28 12:42:24.769302708 +0000 UTC m=+405.334978062" Nov 28 12:42:24 crc kubenswrapper[4779]: I1128 12:42:24.808077 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hppfn" podStartSLOduration=2.254405734 podStartE2EDuration="4.808061855s" podCreationTimestamp="2025-11-28 12:42:20 +0000 UTC" firstStartedPulling="2025-11-28 12:42:21.693975307 +0000 UTC m=+402.259650691" lastFinishedPulling="2025-11-28 12:42:24.247631458 +0000 UTC m=+404.813306812" observedRunningTime="2025-11-28 12:42:24.805014789 +0000 UTC m=+405.370690153" watchObservedRunningTime="2025-11-28 12:42:24.808061855 +0000 UTC m=+405.373737209" Nov 28 12:42:26 crc kubenswrapper[4779]: I1128 12:42:26.766865 4779 generic.go:334] "Generic (PLEG): container finished" podID="5b79674b-d129-4bf4-91f2-77b42f1d51ea" containerID="7adf98e44365deb512137e40453e2056ad22133aa0630242fb34e318526e4db1" exitCode=0 Nov 28 12:42:26 crc kubenswrapper[4779]: I1128 12:42:26.766943 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmdc" event={"ID":"5b79674b-d129-4bf4-91f2-77b42f1d51ea","Type":"ContainerDied","Data":"7adf98e44365deb512137e40453e2056ad22133aa0630242fb34e318526e4db1"} Nov 28 12:42:26 crc kubenswrapper[4779]: I1128 12:42:26.770692 4779 generic.go:334] "Generic (PLEG): container finished" podID="218924d0-58ac-460f-a4f6-f00925ee6a97" containerID="51135c050f6764d194bdb55e51044a0d37d637a5277293481024e12e5bf5996b" exitCode=0 Nov 28 12:42:26 crc 
kubenswrapper[4779]: I1128 12:42:26.770744 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdz82" event={"ID":"218924d0-58ac-460f-a4f6-f00925ee6a97","Type":"ContainerDied","Data":"51135c050f6764d194bdb55e51044a0d37d637a5277293481024e12e5bf5996b"} Nov 28 12:42:29 crc kubenswrapper[4779]: I1128 12:42:29.797963 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qmdc" event={"ID":"5b79674b-d129-4bf4-91f2-77b42f1d51ea","Type":"ContainerStarted","Data":"565acd0dd56ea920f74548bea9f2a5146aa4198aa3232c0ba99e387b4d1e36c2"} Nov 28 12:42:29 crc kubenswrapper[4779]: I1128 12:42:29.801072 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdz82" event={"ID":"218924d0-58ac-460f-a4f6-f00925ee6a97","Type":"ContainerStarted","Data":"896b29a5f6c1cae205b1cbc6a5c3509d2a4122b5aa97eab6134a7db5b6b1be94"} Nov 28 12:42:29 crc kubenswrapper[4779]: I1128 12:42:29.820466 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6qmdc" podStartSLOduration=4.280417485 podStartE2EDuration="6.820428569s" podCreationTimestamp="2025-11-28 12:42:23 +0000 UTC" firstStartedPulling="2025-11-28 12:42:24.736356845 +0000 UTC m=+405.302032199" lastFinishedPulling="2025-11-28 12:42:27.276367929 +0000 UTC m=+407.842043283" observedRunningTime="2025-11-28 12:42:29.81482562 +0000 UTC m=+410.380500974" watchObservedRunningTime="2025-11-28 12:42:29.820428569 +0000 UTC m=+410.386103923" Nov 28 12:42:31 crc kubenswrapper[4779]: I1128 12:42:31.083629 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:31 crc kubenswrapper[4779]: I1128 12:42:31.083998 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:31 crc kubenswrapper[4779]: I1128 12:42:31.138021 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:31 crc kubenswrapper[4779]: I1128 12:42:31.164950 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gdz82" podStartSLOduration=5.563474081 podStartE2EDuration="8.164932965s" podCreationTimestamp="2025-11-28 12:42:23 +0000 UTC" firstStartedPulling="2025-11-28 12:42:24.725831887 +0000 UTC m=+405.291507291" lastFinishedPulling="2025-11-28 12:42:27.327290801 +0000 UTC m=+407.892966175" observedRunningTime="2025-11-28 12:42:29.848894205 +0000 UTC m=+410.414569559" watchObservedRunningTime="2025-11-28 12:42:31.164932965 +0000 UTC m=+411.730608339" Nov 28 12:42:31 crc kubenswrapper[4779]: I1128 12:42:31.279539 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hppfn" Nov 28 12:42:31 crc kubenswrapper[4779]: I1128 12:42:31.279590 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hppfn" Nov 28 12:42:31 crc kubenswrapper[4779]: I1128 12:42:31.331539 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hppfn" Nov 28 12:42:31 crc kubenswrapper[4779]: I1128 12:42:31.863946 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jngtm" Nov 28 12:42:31 crc 
kubenswrapper[4779]: I1128 12:42:31.873740 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hppfn" Nov 28 12:42:33 crc kubenswrapper[4779]: I1128 12:42:33.470631 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6qmdc" Nov 28 12:42:33 crc kubenswrapper[4779]: I1128 12:42:33.471200 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6qmdc" Nov 28 12:42:33 crc kubenswrapper[4779]: I1128 12:42:33.506491 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6qmdc" Nov 28 12:42:33 crc kubenswrapper[4779]: I1128 12:42:33.668636 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:33 crc kubenswrapper[4779]: I1128 12:42:33.668799 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:33 crc kubenswrapper[4779]: I1128 12:42:33.726029 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:33 crc kubenswrapper[4779]: I1128 12:42:33.876890 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gdz82" Nov 28 12:42:33 crc kubenswrapper[4779]: I1128 12:42:33.907065 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6qmdc" Nov 28 12:42:46 crc kubenswrapper[4779]: I1128 12:42:46.284934 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:42:46 crc kubenswrapper[4779]: I1128 12:42:46.285656 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:42:46 crc kubenswrapper[4779]: I1128 12:42:46.285717 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:42:46 crc kubenswrapper[4779]: I1128 12:42:46.286521 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"655348a98a3eea4baa5e428dc13dd64fc735d8645fc4d4eaf09b66ffacea7023"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:42:46 crc kubenswrapper[4779]: I1128 12:42:46.286766 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://655348a98a3eea4baa5e428dc13dd64fc735d8645fc4d4eaf09b66ffacea7023" gracePeriod=600 Nov 28 12:42:48 crc kubenswrapper[4779]: I1128 12:42:48.920756 4779 generic.go:334] 
"Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="655348a98a3eea4baa5e428dc13dd64fc735d8645fc4d4eaf09b66ffacea7023" exitCode=0 Nov 28 12:42:48 crc kubenswrapper[4779]: I1128 12:42:48.920848 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"655348a98a3eea4baa5e428dc13dd64fc735d8645fc4d4eaf09b66ffacea7023"} Nov 28 12:42:48 crc kubenswrapper[4779]: I1128 12:42:48.920960 4779 scope.go:117] "RemoveContainer" containerID="5f92b1378efd9146ee3cb61fef14092136e47b318d132a400c768bedf50d034e" Nov 28 12:42:48 crc kubenswrapper[4779]: I1128 12:42:48.922590 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" podUID="09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" containerName="registry" containerID="cri-o://b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b" gracePeriod=30 Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.366415 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.516563 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-ca-trust-extracted\") pod \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.517306 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.517490 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-installation-pull-secrets\") pod \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.518744 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-bound-sa-token\") pod \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.518795 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-registry-certificates\") pod \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.519038 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-trusted-ca\") pod \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.519203 4779 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hd6pq\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-kube-api-access-hd6pq\") pod \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.519252 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-registry-tls\") pod \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\" (UID: \"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd\") " Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.520983 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.523132 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.525881 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.526472 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.527175 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.527563 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-kube-api-access-hd6pq" (OuterVolumeSpecName: "kube-api-access-hd6pq") pod "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd"). InnerVolumeSpecName "kube-api-access-hd6pq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.536166 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.553940 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" (UID: "09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.620658 4779 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.620722 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd6pq\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-kube-api-access-hd6pq\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.620737 4779 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.621175 4779 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.621201 4779 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.621213 4779 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.621224 4779 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.932554 4779 generic.go:334] "Generic (PLEG): container finished" podID="09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" containerID="b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b" exitCode=0 Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.932710 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" event={"ID":"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd","Type":"ContainerDied","Data":"b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b"} Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 
12:42:49.932971 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" event={"ID":"09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd","Type":"ContainerDied","Data":"dc712056739a0e5ec826da2cb96b95c4b28ec9e9d0399599c40b35839a9e81fc"} Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.933020 4779 scope.go:117] "RemoveContainer" containerID="b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.934050 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gfm2w" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.937745 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"31095d9d6d0a461f735b1edaee2a8d6fa31f53fbcf5ad74a652fee4e119ba7db"} Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.964175 4779 scope.go:117] "RemoveContainer" containerID="b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b" Nov 28 12:42:49 crc kubenswrapper[4779]: E1128 12:42:49.965206 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b\": container with ID starting with b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b not found: ID does not exist" containerID="b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.965267 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b"} err="failed to get container status \"b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b\": rpc error: code = NotFound desc = could not find container \"b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b\": container with ID starting with b390bff70b1780f9f3887253baa7db36e9ca7286b165deaa438995309eb8a05b not found: ID does not exist" Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.966150 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gfm2w"] Nov 28 12:42:49 crc kubenswrapper[4779]: I1128 12:42:49.972057 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gfm2w"] Nov 28 12:42:51 crc kubenswrapper[4779]: I1128 12:42:51.737384 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" path="/var/lib/kubelet/pods/09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd/volumes" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.207982 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj"] Nov 28 12:45:00 crc kubenswrapper[4779]: E1128 12:45:00.209350 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" containerName="registry" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.209530 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" containerName="registry" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.209735 4779 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="09ddbf68-8e74-4dc3-b5bb-e1f5863b2cdd" containerName="registry" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.210333 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.219481 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.220042 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.226189 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj"] Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.258803 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16957643-e7b1-4447-a15c-da6bdb1fbe75-config-volume\") pod \"collect-profiles-29405565-kxfxj\" (UID: \"16957643-e7b1-4447-a15c-da6bdb1fbe75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.258910 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16957643-e7b1-4447-a15c-da6bdb1fbe75-secret-volume\") pod \"collect-profiles-29405565-kxfxj\" (UID: \"16957643-e7b1-4447-a15c-da6bdb1fbe75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.259233 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qxpg\" (UniqueName: \"kubernetes.io/projected/16957643-e7b1-4447-a15c-da6bdb1fbe75-kube-api-access-2qxpg\") pod \"collect-profiles-29405565-kxfxj\" (UID: \"16957643-e7b1-4447-a15c-da6bdb1fbe75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.360320 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qxpg\" (UniqueName: \"kubernetes.io/projected/16957643-e7b1-4447-a15c-da6bdb1fbe75-kube-api-access-2qxpg\") pod \"collect-profiles-29405565-kxfxj\" (UID: \"16957643-e7b1-4447-a15c-da6bdb1fbe75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.360401 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16957643-e7b1-4447-a15c-da6bdb1fbe75-config-volume\") pod \"collect-profiles-29405565-kxfxj\" (UID: \"16957643-e7b1-4447-a15c-da6bdb1fbe75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.360472 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16957643-e7b1-4447-a15c-da6bdb1fbe75-secret-volume\") pod \"collect-profiles-29405565-kxfxj\" (UID: \"16957643-e7b1-4447-a15c-da6bdb1fbe75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.362063 
4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16957643-e7b1-4447-a15c-da6bdb1fbe75-config-volume\") pod \"collect-profiles-29405565-kxfxj\" (UID: \"16957643-e7b1-4447-a15c-da6bdb1fbe75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.371419 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16957643-e7b1-4447-a15c-da6bdb1fbe75-secret-volume\") pod \"collect-profiles-29405565-kxfxj\" (UID: \"16957643-e7b1-4447-a15c-da6bdb1fbe75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.389900 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qxpg\" (UniqueName: \"kubernetes.io/projected/16957643-e7b1-4447-a15c-da6bdb1fbe75-kube-api-access-2qxpg\") pod \"collect-profiles-29405565-kxfxj\" (UID: \"16957643-e7b1-4447-a15c-da6bdb1fbe75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.541719 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:00 crc kubenswrapper[4779]: I1128 12:45:00.978338 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj"] Nov 28 12:45:01 crc kubenswrapper[4779]: I1128 12:45:01.969551 4779 generic.go:334] "Generic (PLEG): container finished" podID="16957643-e7b1-4447-a15c-da6bdb1fbe75" containerID="c41911ddb09c5a07a07fdcd95dd3d446a039175941c64231ce3de1a5eaeddfaa" exitCode=0 Nov 28 12:45:01 crc kubenswrapper[4779]: I1128 12:45:01.969672 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" event={"ID":"16957643-e7b1-4447-a15c-da6bdb1fbe75","Type":"ContainerDied","Data":"c41911ddb09c5a07a07fdcd95dd3d446a039175941c64231ce3de1a5eaeddfaa"} Nov 28 12:45:01 crc kubenswrapper[4779]: I1128 12:45:01.970031 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" event={"ID":"16957643-e7b1-4447-a15c-da6bdb1fbe75","Type":"ContainerStarted","Data":"13c4f4c5388d1900a757d32f837f4e09219104aa7a46f2f869b0209b4bebd875"} Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.280447 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.316271 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16957643-e7b1-4447-a15c-da6bdb1fbe75-config-volume\") pod \"16957643-e7b1-4447-a15c-da6bdb1fbe75\" (UID: \"16957643-e7b1-4447-a15c-da6bdb1fbe75\") " Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.316370 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qxpg\" (UniqueName: \"kubernetes.io/projected/16957643-e7b1-4447-a15c-da6bdb1fbe75-kube-api-access-2qxpg\") pod \"16957643-e7b1-4447-a15c-da6bdb1fbe75\" (UID: \"16957643-e7b1-4447-a15c-da6bdb1fbe75\") " Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.316501 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16957643-e7b1-4447-a15c-da6bdb1fbe75-secret-volume\") pod \"16957643-e7b1-4447-a15c-da6bdb1fbe75\" (UID: \"16957643-e7b1-4447-a15c-da6bdb1fbe75\") " Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.318947 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16957643-e7b1-4447-a15c-da6bdb1fbe75-config-volume" (OuterVolumeSpecName: "config-volume") pod "16957643-e7b1-4447-a15c-da6bdb1fbe75" (UID: "16957643-e7b1-4447-a15c-da6bdb1fbe75"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.327730 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16957643-e7b1-4447-a15c-da6bdb1fbe75-kube-api-access-2qxpg" (OuterVolumeSpecName: "kube-api-access-2qxpg") pod "16957643-e7b1-4447-a15c-da6bdb1fbe75" (UID: "16957643-e7b1-4447-a15c-da6bdb1fbe75"). InnerVolumeSpecName "kube-api-access-2qxpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.327973 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16957643-e7b1-4447-a15c-da6bdb1fbe75-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "16957643-e7b1-4447-a15c-da6bdb1fbe75" (UID: "16957643-e7b1-4447-a15c-da6bdb1fbe75"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.417918 4779 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16957643-e7b1-4447-a15c-da6bdb1fbe75-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.417986 4779 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16957643-e7b1-4447-a15c-da6bdb1fbe75-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.418007 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qxpg\" (UniqueName: \"kubernetes.io/projected/16957643-e7b1-4447-a15c-da6bdb1fbe75-kube-api-access-2qxpg\") on node \"crc\" DevicePath \"\"" Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.987567 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" event={"ID":"16957643-e7b1-4447-a15c-da6bdb1fbe75","Type":"ContainerDied","Data":"13c4f4c5388d1900a757d32f837f4e09219104aa7a46f2f869b0209b4bebd875"} Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.988237 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13c4f4c5388d1900a757d32f837f4e09219104aa7a46f2f869b0209b4bebd875" Nov 28 12:45:03 crc kubenswrapper[4779]: I1128 12:45:03.988341 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj" Nov 28 12:45:16 crc kubenswrapper[4779]: I1128 12:45:16.284989 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:45:16 crc kubenswrapper[4779]: I1128 12:45:16.285730 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:45:46 crc kubenswrapper[4779]: I1128 12:45:46.284982 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:45:46 crc kubenswrapper[4779]: I1128 12:45:46.285768 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:46:16 crc kubenswrapper[4779]: I1128 12:46:16.284880 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:46:16 crc kubenswrapper[4779]: I1128 12:46:16.285711 
4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:46:16 crc kubenswrapper[4779]: I1128 12:46:16.285777 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:46:16 crc kubenswrapper[4779]: I1128 12:46:16.286625 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31095d9d6d0a461f735b1edaee2a8d6fa31f53fbcf5ad74a652fee4e119ba7db"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:46:16 crc kubenswrapper[4779]: I1128 12:46:16.286719 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://31095d9d6d0a461f735b1edaee2a8d6fa31f53fbcf5ad74a652fee4e119ba7db" gracePeriod=600 Nov 28 12:46:16 crc kubenswrapper[4779]: I1128 12:46:16.498913 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="31095d9d6d0a461f735b1edaee2a8d6fa31f53fbcf5ad74a652fee4e119ba7db" exitCode=0 Nov 28 12:46:16 crc kubenswrapper[4779]: I1128 12:46:16.499163 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"31095d9d6d0a461f735b1edaee2a8d6fa31f53fbcf5ad74a652fee4e119ba7db"} Nov 28 12:46:16 crc kubenswrapper[4779]: I1128 12:46:16.499905 4779 scope.go:117] "RemoveContainer" containerID="655348a98a3eea4baa5e428dc13dd64fc735d8645fc4d4eaf09b66ffacea7023" Nov 28 12:46:17 crc kubenswrapper[4779]: I1128 12:46:17.509428 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"f5e93de974be41a0eb6481eaf0510c9e7e4484d2b3ab950a8d456a68806d2e6f"} Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.902150 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-fx6q6"] Nov 28 12:47:34 crc kubenswrapper[4779]: E1128 12:47:34.903020 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16957643-e7b1-4447-a15c-da6bdb1fbe75" containerName="collect-profiles" Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.903039 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="16957643-e7b1-4447-a15c-da6bdb1fbe75" containerName="collect-profiles" Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.903228 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="16957643-e7b1-4447-a15c-da6bdb1fbe75" containerName="collect-profiles" Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.903765 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-fx6q6" Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.905725 4779 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-gqcnv" Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.905888 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.907402 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.916116 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-fx6q6"] Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.935454 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-5qqff"] Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.939011 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-5qqff" Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.946263 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-bvk27"] Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.946593 4779 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-vttsq" Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.959850 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-5qqff"] Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.959942 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-bvk27" Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.965509 4779 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bm2hj" Nov 28 12:47:34 crc kubenswrapper[4779]: I1128 12:47:34.974943 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-bvk27"] Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.066418 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jldzf\" (UniqueName: \"kubernetes.io/projected/8445721b-8f86-4161-adc3-2ddf58f3aa94-kube-api-access-jldzf\") pod \"cert-manager-webhook-5655c58dd6-bvk27\" (UID: \"8445721b-8f86-4161-adc3-2ddf58f3aa94\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-bvk27" Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.066495 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6q68\" (UniqueName: \"kubernetes.io/projected/ec2c397e-6b4d-4ffc-9ffa-4f437657da02-kube-api-access-l6q68\") pod \"cert-manager-5b446d88c5-5qqff\" (UID: \"ec2c397e-6b4d-4ffc-9ffa-4f437657da02\") " pod="cert-manager/cert-manager-5b446d88c5-5qqff" Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.066697 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwq69\" (UniqueName: \"kubernetes.io/projected/17acea2c-1197-4905-bb74-3f4137eb521d-kube-api-access-wwq69\") pod \"cert-manager-cainjector-7f985d654d-fx6q6\" (UID: \"17acea2c-1197-4905-bb74-3f4137eb521d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-fx6q6" Nov 28 12:47:35 
crc kubenswrapper[4779]: I1128 12:47:35.168217 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6q68\" (UniqueName: \"kubernetes.io/projected/ec2c397e-6b4d-4ffc-9ffa-4f437657da02-kube-api-access-l6q68\") pod \"cert-manager-5b446d88c5-5qqff\" (UID: \"ec2c397e-6b4d-4ffc-9ffa-4f437657da02\") " pod="cert-manager/cert-manager-5b446d88c5-5qqff" Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.168300 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwq69\" (UniqueName: \"kubernetes.io/projected/17acea2c-1197-4905-bb74-3f4137eb521d-kube-api-access-wwq69\") pod \"cert-manager-cainjector-7f985d654d-fx6q6\" (UID: \"17acea2c-1197-4905-bb74-3f4137eb521d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-fx6q6" Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.168376 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jldzf\" (UniqueName: \"kubernetes.io/projected/8445721b-8f86-4161-adc3-2ddf58f3aa94-kube-api-access-jldzf\") pod \"cert-manager-webhook-5655c58dd6-bvk27\" (UID: \"8445721b-8f86-4161-adc3-2ddf58f3aa94\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-bvk27" Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.191627 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6q68\" (UniqueName: \"kubernetes.io/projected/ec2c397e-6b4d-4ffc-9ffa-4f437657da02-kube-api-access-l6q68\") pod \"cert-manager-5b446d88c5-5qqff\" (UID: \"ec2c397e-6b4d-4ffc-9ffa-4f437657da02\") " pod="cert-manager/cert-manager-5b446d88c5-5qqff" Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.191811 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwq69\" (UniqueName: \"kubernetes.io/projected/17acea2c-1197-4905-bb74-3f4137eb521d-kube-api-access-wwq69\") pod \"cert-manager-cainjector-7f985d654d-fx6q6\" (UID: \"17acea2c-1197-4905-bb74-3f4137eb521d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-fx6q6" Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.193755 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jldzf\" (UniqueName: \"kubernetes.io/projected/8445721b-8f86-4161-adc3-2ddf58f3aa94-kube-api-access-jldzf\") pod \"cert-manager-webhook-5655c58dd6-bvk27\" (UID: \"8445721b-8f86-4161-adc3-2ddf58f3aa94\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-bvk27" Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.220466 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-fx6q6" Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.254668 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-5qqff" Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.280014 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-bvk27" Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.504441 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-fx6q6"] Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.513809 4779 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.573973 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-bvk27"] Nov 28 12:47:35 crc kubenswrapper[4779]: W1128 12:47:35.578971 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8445721b_8f86_4161_adc3_2ddf58f3aa94.slice/crio-bdc23439d09ed70aae4884030b141878e4ec0eb7d0e8a066670543f8480da4af WatchSource:0}: Error finding container bdc23439d09ed70aae4884030b141878e4ec0eb7d0e8a066670543f8480da4af: Status 404 returned error can't find the container with id bdc23439d09ed70aae4884030b141878e4ec0eb7d0e8a066670543f8480da4af Nov 28 12:47:35 crc kubenswrapper[4779]: I1128 12:47:35.754835 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-5qqff"] Nov 28 12:47:35 crc kubenswrapper[4779]: W1128 12:47:35.756475 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec2c397e_6b4d_4ffc_9ffa_4f437657da02.slice/crio-d56a60283f6ffed87dd49d71e3af09e2e2f65424d9215600991399245db5c3fa WatchSource:0}: Error finding container d56a60283f6ffed87dd49d71e3af09e2e2f65424d9215600991399245db5c3fa: Status 404 returned error can't find the container with id d56a60283f6ffed87dd49d71e3af09e2e2f65424d9215600991399245db5c3fa Nov 28 12:47:36 crc kubenswrapper[4779]: I1128 12:47:36.081731 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-5qqff" event={"ID":"ec2c397e-6b4d-4ffc-9ffa-4f437657da02","Type":"ContainerStarted","Data":"d56a60283f6ffed87dd49d71e3af09e2e2f65424d9215600991399245db5c3fa"} Nov 28 12:47:36 crc kubenswrapper[4779]: I1128 12:47:36.083305 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-fx6q6" event={"ID":"17acea2c-1197-4905-bb74-3f4137eb521d","Type":"ContainerStarted","Data":"24545f4165c9e5c5c8694af2bf16b0ca268a88f788e8f56ebd9a9e89000d27a1"} Nov 28 12:47:36 crc kubenswrapper[4779]: I1128 12:47:36.085152 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-bvk27" event={"ID":"8445721b-8f86-4161-adc3-2ddf58f3aa94","Type":"ContainerStarted","Data":"bdc23439d09ed70aae4884030b141878e4ec0eb7d0e8a066670543f8480da4af"} Nov 28 12:47:38 crc kubenswrapper[4779]: I1128 12:47:38.096915 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-fx6q6" event={"ID":"17acea2c-1197-4905-bb74-3f4137eb521d","Type":"ContainerStarted","Data":"f15ec492482a4f355736a154d3be69d2c1485da61f30a0b8177f2958cd861c96"} Nov 28 12:47:38 crc kubenswrapper[4779]: I1128 12:47:38.122623 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-fx6q6" podStartSLOduration=2.029073501 podStartE2EDuration="4.122605394s" podCreationTimestamp="2025-11-28 12:47:34 +0000 UTC" firstStartedPulling="2025-11-28 12:47:35.513305102 +0000 UTC m=+716.078980496" 
lastFinishedPulling="2025-11-28 12:47:37.606837025 +0000 UTC m=+718.172512389" observedRunningTime="2025-11-28 12:47:38.120416835 +0000 UTC m=+718.686092179" watchObservedRunningTime="2025-11-28 12:47:38.122605394 +0000 UTC m=+718.688280748" Nov 28 12:47:40 crc kubenswrapper[4779]: I1128 12:47:40.111404 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-bvk27" event={"ID":"8445721b-8f86-4161-adc3-2ddf58f3aa94","Type":"ContainerStarted","Data":"4fdfa0ff75bc7d2beb9829120f4ccefe579f9f85e46f2bb3230bdbbbe6bffdbf"} Nov 28 12:47:40 crc kubenswrapper[4779]: I1128 12:47:40.111837 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-bvk27" Nov 28 12:47:40 crc kubenswrapper[4779]: I1128 12:47:40.113403 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-5qqff" event={"ID":"ec2c397e-6b4d-4ffc-9ffa-4f437657da02","Type":"ContainerStarted","Data":"705e16c8c612b70d8149f673d0e9aaa0959b2f8062d9f03bfba7a83f8bf54923"} Nov 28 12:47:40 crc kubenswrapper[4779]: I1128 12:47:40.144252 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-bvk27" podStartSLOduration=2.66467291 podStartE2EDuration="6.144235101s" podCreationTimestamp="2025-11-28 12:47:34 +0000 UTC" firstStartedPulling="2025-11-28 12:47:35.581230111 +0000 UTC m=+716.146905475" lastFinishedPulling="2025-11-28 12:47:39.060792272 +0000 UTC m=+719.626467666" observedRunningTime="2025-11-28 12:47:40.138760973 +0000 UTC m=+720.704436337" watchObservedRunningTime="2025-11-28 12:47:40.144235101 +0000 UTC m=+720.709910465" Nov 28 12:47:40 crc kubenswrapper[4779]: I1128 12:47:40.161188 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-5qqff" podStartSLOduration=2.801824275 podStartE2EDuration="6.161163077s" podCreationTimestamp="2025-11-28 12:47:34 +0000 UTC" firstStartedPulling="2025-11-28 12:47:35.759701888 +0000 UTC m=+716.325377242" lastFinishedPulling="2025-11-28 12:47:39.11904065 +0000 UTC m=+719.684716044" observedRunningTime="2025-11-28 12:47:40.155818163 +0000 UTC m=+720.721493557" watchObservedRunningTime="2025-11-28 12:47:40.161163077 +0000 UTC m=+720.726838461" Nov 28 12:47:45 crc kubenswrapper[4779]: I1128 12:47:45.284341 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-bvk27" Nov 28 12:47:45 crc kubenswrapper[4779]: I1128 12:47:45.795326 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pbmbn"] Nov 28 12:47:45 crc kubenswrapper[4779]: I1128 12:47:45.796022 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovn-controller" containerID="cri-o://0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868" gracePeriod=30 Nov 28 12:47:45 crc kubenswrapper[4779]: I1128 12:47:45.796107 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="nbdb" containerID="cri-o://bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71" gracePeriod=30 Nov 28 12:47:45 crc kubenswrapper[4779]: I1128 12:47:45.796181 4779 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="kube-rbac-proxy-node" containerID="cri-o://759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620" gracePeriod=30 Nov 28 12:47:45 crc kubenswrapper[4779]: I1128 12:47:45.796179 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9" gracePeriod=30 Nov 28 12:47:45 crc kubenswrapper[4779]: I1128 12:47:45.796242 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="sbdb" containerID="cri-o://514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0" gracePeriod=30 Nov 28 12:47:45 crc kubenswrapper[4779]: I1128 12:47:45.796282 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovn-acl-logging" containerID="cri-o://683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2" gracePeriod=30 Nov 28 12:47:45 crc kubenswrapper[4779]: I1128 12:47:45.796317 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="northd" containerID="cri-o://192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e" gracePeriod=30 Nov 28 12:47:45 crc kubenswrapper[4779]: I1128 12:47:45.872636 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" containerID="cri-o://356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241" gracePeriod=30 Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.142987 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/3.log" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.147032 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovn-acl-logging/0.log" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.147825 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovn-controller/0.log" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.150175 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.152166 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pzwdx_ba664a9e-76d2-4d02-889a-e7062bfc903c/kube-multus/2.log" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.152908 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pzwdx_ba664a9e-76d2-4d02-889a-e7062bfc903c/kube-multus/1.log" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.152970 4779 generic.go:334] "Generic (PLEG): container finished" podID="ba664a9e-76d2-4d02-889a-e7062bfc903c" containerID="f1a944d63eb31fd058243070791b29847489e8e4a0cd31d1b188b45c0790f5f2" exitCode=2 Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.153047 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pzwdx" event={"ID":"ba664a9e-76d2-4d02-889a-e7062bfc903c","Type":"ContainerDied","Data":"f1a944d63eb31fd058243070791b29847489e8e4a0cd31d1b188b45c0790f5f2"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.153106 4779 scope.go:117] "RemoveContainer" containerID="3c11decc7085592a2a1e13b74049f378421293a7a1929f765860c47824c4b7a5" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.153615 4779 scope.go:117] "RemoveContainer" containerID="f1a944d63eb31fd058243070791b29847489e8e4a0cd31d1b188b45c0790f5f2" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.158459 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovnkube-controller/3.log" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.165636 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovn-acl-logging/0.log" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.166257 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pbmbn_35f4f43e-a921-41b2-aa88-506055daff60/ovn-controller/0.log" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.166778 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241" exitCode=0 Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.166970 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0" exitCode=0 Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.166989 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71" exitCode=0 Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.166857 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167065 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0"} Nov 28 12:47:46 crc 
kubenswrapper[4779]: I1128 12:47:46.167119 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167140 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.166916 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167002 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e" exitCode=0 Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167293 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9" exitCode=0 Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167307 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620" exitCode=0 Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167320 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2" exitCode=143 Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167333 4779 generic.go:334] "Generic (PLEG): container finished" podID="35f4f43e-a921-41b2-aa88-506055daff60" containerID="0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868" exitCode=143 Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167356 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167378 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167398 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167416 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167428 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167439 4779 
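
The ContainerDied events above land less than a second after the corresponding "Killing container with a grace period" entries, well inside the requested gracePeriod=30. The exit codes follow the usual 128+signal convention: 0 for containers that handled SIGTERM and shut down cleanly (ovnkube-controller, sbdb, nbdb, northd, both kube-rbac-proxy sidecars), and 143 = 128 + 15 for ovn-acl-logging and ovn-controller, whose processes were terminated by the signal's default disposition. A small decoder for that convention (decodeExitCode is our name; the convention is the general shell/runtime one, not anything kubelet prints):

```go
package main

import "fmt"

// decodeExitCode reads a container exit code the way shells do:
// values above 128 conventionally mean "terminated by signal code-128".
func decodeExitCode(code int) string {
	sigNames := map[int]string{9: "SIGKILL", 15: "SIGTERM"}
	switch {
	case code == 0:
		return "exited cleanly"
	case code > 128:
		sig := code - 128
		return fmt.Sprintf("terminated by signal %d (%s)", sig, sigNames[sig])
	default:
		return fmt.Sprintf("exited with error %d", code)
	}
}

func main() {
	for _, c := range []int{0, 143} { // the two codes observed above
		fmt.Printf("exitCode=%d: %s\n", c, decodeExitCode(c))
	}
}
```
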
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167450 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167461 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167474 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167484 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167520 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167531 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167546 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167564 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167577 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167587 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167599 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167610 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167621 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167632 4779 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167643 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167655 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167666 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167681 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167696 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167708 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167720 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167731 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167741 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167752 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167762 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167773 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167784 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167794 4779 
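
Each of those ContainerDied events re-triggers cleanup of the whole pod, so the same ten CRI-O IDs cycle through pod_container_deletor four times within a millisecond; the repeated "Failed to issue the request to remove container" lines are informational retries rather than a fatal condition, as the NotFound responses further down confirm. When auditing a burst like this, a per-ID tally is easier to read than the raw lines; a throwaway pass over a saved excerpt (kubelet.log is a hypothetical file name):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Match pod_container_deletor lines and capture the 64-hex CRI-O ID,
	// e.g. containerID={"Type":"cri-o","ID":"356f9859..."}.
	re := regexp.MustCompile(`pod_container_deletor.*"ID":"([0-9a-f]{64})"`)

	f, err := os.Open("kubelet.log") // hypothetical saved journal excerpt
	if err != nil {
		panic(err)
	}
	defer f.Close()

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines run long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for id, n := range counts {
		fmt.Printf("%s... %d removal attempts\n", id[:12], n)
	}
}
```
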
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167809 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pbmbn" event={"ID":"35f4f43e-a921-41b2-aa88-506055daff60","Type":"ContainerDied","Data":"b865dc8e82285005474063520e288759e283d7dcbb76e1e9245a86914efd1e56"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167824 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167836 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167846 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167857 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167867 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167877 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167887 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167899 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167910 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.167920 4779 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29"} Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.227063 4779 scope.go:117] "RemoveContainer" containerID="356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.255179 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxqgl"] Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.255508 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="kubecfg-setup" Nov 28 12:47:46 
crc kubenswrapper[4779]: I1128 12:47:46.255529 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="kubecfg-setup" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.255568 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovn-acl-logging" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.255581 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovn-acl-logging" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.255603 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.255616 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.255631 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="sbdb" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.255644 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="sbdb" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.255663 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.255676 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.255693 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="kube-rbac-proxy-node" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.255705 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="kube-rbac-proxy-node" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.255718 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovn-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.255730 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovn-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.255746 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.255759 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.255775 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.255786 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.255807 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" 
containerName="kube-rbac-proxy-ovn-metrics" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.255818 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.255837 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="nbdb" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.255848 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="nbdb" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.255870 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="northd" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.255882 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="northd" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256046 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256061 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256082 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="sbdb" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256124 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovn-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256138 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="kube-rbac-proxy-node" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256153 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="northd" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256172 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256185 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="nbdb" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256201 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovn-acl-logging" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256217 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256231 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.256400 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256415 4779 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.256584 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f4f43e-a921-41b2-aa88-506055daff60" containerName="ovnkube-controller" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.259651 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.261791 4779 scope.go:117] "RemoveContainer" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.296895 4779 scope.go:117] "RemoveContainer" containerID="514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.314137 4779 scope.go:117] "RemoveContainer" containerID="bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330190 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-ovnkube-config\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330226 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-cni-netd\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330268 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-env-overrides\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330319 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-etc-openvswitch\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330340 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-node-log\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330359 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-cni-bin\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330380 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5msg\" (UniqueName: \"kubernetes.io/projected/35f4f43e-a921-41b2-aa88-506055daff60-kube-api-access-q5msg\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330417 4779 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-ovnkube-script-lib\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330440 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-ovn\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330469 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-run-ovn-kubernetes\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330486 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-slash\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330512 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-var-lib-openvswitch\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330534 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-run-netns\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330557 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-kubelet\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330573 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-log-socket\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330591 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/35f4f43e-a921-41b2-aa88-506055daff60-ovn-node-metrics-cert\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330616 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-systemd-units\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330636 4779 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-systemd\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330660 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-var-lib-cni-networks-ovn-kubernetes\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.330681 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-openvswitch\") pod \"35f4f43e-a921-41b2-aa88-506055daff60\" (UID: \"35f4f43e-a921-41b2-aa88-506055daff60\") " Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.331718 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.332026 4779 scope.go:117] "RemoveContainer" containerID="192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.332189 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.332219 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.332222 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.332243 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-node-log" (OuterVolumeSpecName: "node-log") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.332259 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.332286 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.332324 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-log-socket" (OuterVolumeSpecName: "log-socket") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.332866 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.333014 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.333076 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.333140 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.333268 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). 
InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.333264 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.333300 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-slash" (OuterVolumeSpecName: "host-slash") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.333337 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.333626 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.339638 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f4f43e-a921-41b2-aa88-506055daff60-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.339825 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f4f43e-a921-41b2-aa88-506055daff60-kube-api-access-q5msg" (OuterVolumeSpecName: "kube-api-access-q5msg") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "kube-api-access-q5msg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.349517 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "35f4f43e-a921-41b2-aa88-506055daff60" (UID: "35f4f43e-a921-41b2-aa88-506055daff60"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.351714 4779 scope.go:117] "RemoveContainer" containerID="d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.364762 4779 scope.go:117] "RemoveContainer" containerID="759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.378265 4779 scope.go:117] "RemoveContainer" containerID="683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.398376 4779 scope.go:117] "RemoveContainer" containerID="0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.413067 4779 scope.go:117] "RemoveContainer" containerID="397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.428833 4779 scope.go:117] "RemoveContainer" containerID="356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.429583 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241\": container with ID starting with 356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241 not found: ID does not exist" containerID="356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.429640 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241"} err="failed to get container status \"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241\": rpc error: code = NotFound desc = could not find container \"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241\": container with ID starting with 356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.429677 4779 scope.go:117] "RemoveContainer" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.430062 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\": container with ID starting with fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b not found: ID does not exist" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.430128 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b"} err="failed to get container status \"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\": rpc error: code = NotFound desc = could not find container \"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\": container with ID starting with fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.430160 4779 scope.go:117] "RemoveContainer" 
containerID="514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.430471 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\": container with ID starting with 514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0 not found: ID does not exist" containerID="514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.430503 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0"} err="failed to get container status \"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\": rpc error: code = NotFound desc = could not find container \"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\": container with ID starting with 514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.430520 4779 scope.go:117] "RemoveContainer" containerID="bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.430882 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\": container with ID starting with bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71 not found: ID does not exist" containerID="bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.430933 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71"} err="failed to get container status \"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\": rpc error: code = NotFound desc = could not find container \"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\": container with ID starting with bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.430950 4779 scope.go:117] "RemoveContainer" containerID="192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.431355 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\": container with ID starting with 192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e not found: ID does not exist" containerID="192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.431397 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e"} err="failed to get container status \"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\": rpc error: code = NotFound desc = could not find container \"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\": container with ID starting with 
192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.431423 4779 scope.go:117] "RemoveContainer" containerID="d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.431663 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-kubelet\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.431712 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-systemd-units\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.431765 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.431803 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-etc-openvswitch\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.431843 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzr4m\" (UniqueName: \"kubernetes.io/projected/59377e57-966a-454b-8151-ecdb0cb73686-kube-api-access-mzr4m\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.431859 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\": container with ID starting with d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9 not found: ID does not exist" containerID="d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.431883 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.431894 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9"} err="failed to get container status 
\"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\": rpc error: code = NotFound desc = could not find container \"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\": container with ID starting with d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.431919 4779 scope.go:117] "RemoveContainer" containerID="759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.431943 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-cni-bin\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.431981 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59377e57-966a-454b-8151-ecdb0cb73686-ovnkube-script-lib\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432018 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-node-log\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432059 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-run-netns\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432110 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-var-lib-openvswitch\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432138 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59377e57-966a-454b-8151-ecdb0cb73686-ovnkube-config\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432168 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-log-socket\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432199 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-cni-netd\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432227 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-slash\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.432254 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\": container with ID starting with 759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620 not found: ID does not exist" containerID="759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432261 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-run-systemd\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432288 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59377e57-966a-454b-8151-ecdb0cb73686-env-overrides\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432271 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620"} err="failed to get container status \"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\": rpc error: code = NotFound desc = could not find container \"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\": container with ID starting with 759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432320 4779 scope.go:117] "RemoveContainer" containerID="683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432369 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-run-openvswitch\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432391 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-run-ovn\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432410 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59377e57-966a-454b-8151-ecdb0cb73686-ovn-node-metrics-cert\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432473 4779 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432484 4779 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-log-socket\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432493 4779 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/35f4f43e-a921-41b2-aa88-506055daff60-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432503 4779 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432527 4779 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432536 4779 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432544 4779 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432553 4779 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432561 4779 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432568 4779 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.432569 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\": container with ID starting with 683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2 not found: ID does not exist" containerID="683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 
12:47:46.432593 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2"} err="failed to get container status \"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\": rpc error: code = NotFound desc = could not find container \"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\": container with ID starting with 683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432613 4779 scope.go:117] "RemoveContainer" containerID="0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432576 4779 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432666 4779 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-node-log\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432680 4779 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432693 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5msg\" (UniqueName: \"kubernetes.io/projected/35f4f43e-a921-41b2-aa88-506055daff60-kube-api-access-q5msg\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432705 4779 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/35f4f43e-a921-41b2-aa88-506055daff60-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432718 4779 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432731 4779 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432743 4779 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-slash\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432754 4779 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432768 4779 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/35f4f43e-a921-41b2-aa88-506055daff60-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.432922 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\": container with ID starting with 0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868 not found: ID does not exist" containerID="0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432938 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868"} err="failed to get container status \"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\": rpc error: code = NotFound desc = could not find container \"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\": container with ID starting with 0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.432949 4779 scope.go:117] "RemoveContainer" containerID="397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29" Nov 28 12:47:46 crc kubenswrapper[4779]: E1128 12:47:46.433420 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\": container with ID starting with 397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29 not found: ID does not exist" containerID="397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.433440 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29"} err="failed to get container status \"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\": rpc error: code = NotFound desc = could not find container \"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\": container with ID starting with 397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.433452 4779 scope.go:117] "RemoveContainer" containerID="356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.433719 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241"} err="failed to get container status \"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241\": rpc error: code = NotFound desc = could not find container \"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241\": container with ID starting with 356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.433750 4779 scope.go:117] "RemoveContainer" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.434193 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b"} err="failed to get container status \"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\": rpc error: code = NotFound desc = could not find container 
\"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\": container with ID starting with fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.434207 4779 scope.go:117] "RemoveContainer" containerID="514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.434533 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0"} err="failed to get container status \"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\": rpc error: code = NotFound desc = could not find container \"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\": container with ID starting with 514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.434547 4779 scope.go:117] "RemoveContainer" containerID="bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.434797 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71"} err="failed to get container status \"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\": rpc error: code = NotFound desc = could not find container \"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\": container with ID starting with bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.434811 4779 scope.go:117] "RemoveContainer" containerID="192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.435160 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e"} err="failed to get container status \"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\": rpc error: code = NotFound desc = could not find container \"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\": container with ID starting with 192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.435201 4779 scope.go:117] "RemoveContainer" containerID="d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.435395 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9"} err="failed to get container status \"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\": rpc error: code = NotFound desc = could not find container \"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\": container with ID starting with d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.435410 4779 scope.go:117] "RemoveContainer" containerID="759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.435696 4779 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620"} err="failed to get container status \"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\": rpc error: code = NotFound desc = could not find container \"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\": container with ID starting with 759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.435710 4779 scope.go:117] "RemoveContainer" containerID="683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.435926 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2"} err="failed to get container status \"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\": rpc error: code = NotFound desc = could not find container \"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\": container with ID starting with 683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.435944 4779 scope.go:117] "RemoveContainer" containerID="0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.436440 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868"} err="failed to get container status \"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\": rpc error: code = NotFound desc = could not find container \"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\": container with ID starting with 0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.436473 4779 scope.go:117] "RemoveContainer" containerID="397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.436806 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29"} err="failed to get container status \"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\": rpc error: code = NotFound desc = could not find container \"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\": container with ID starting with 397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.436827 4779 scope.go:117] "RemoveContainer" containerID="356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.437122 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241"} err="failed to get container status \"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241\": rpc error: code = NotFound desc = could not find container \"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241\": container with ID starting with 
356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.437148 4779 scope.go:117] "RemoveContainer" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.437545 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b"} err="failed to get container status \"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\": rpc error: code = NotFound desc = could not find container \"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\": container with ID starting with fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.437581 4779 scope.go:117] "RemoveContainer" containerID="514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.438170 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0"} err="failed to get container status \"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\": rpc error: code = NotFound desc = could not find container \"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\": container with ID starting with 514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.438199 4779 scope.go:117] "RemoveContainer" containerID="bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.438569 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71"} err="failed to get container status \"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\": rpc error: code = NotFound desc = could not find container \"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\": container with ID starting with bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.438760 4779 scope.go:117] "RemoveContainer" containerID="192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.439204 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e"} err="failed to get container status \"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\": rpc error: code = NotFound desc = could not find container \"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\": container with ID starting with 192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.439225 4779 scope.go:117] "RemoveContainer" containerID="d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.439583 4779 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9"} err="failed to get container status \"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\": rpc error: code = NotFound desc = could not find container \"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\": container with ID starting with d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.439813 4779 scope.go:117] "RemoveContainer" containerID="759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.440164 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620"} err="failed to get container status \"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\": rpc error: code = NotFound desc = could not find container \"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\": container with ID starting with 759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.440189 4779 scope.go:117] "RemoveContainer" containerID="683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.440631 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2"} err="failed to get container status \"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\": rpc error: code = NotFound desc = could not find container \"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\": container with ID starting with 683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.440652 4779 scope.go:117] "RemoveContainer" containerID="0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.440948 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868"} err="failed to get container status \"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\": rpc error: code = NotFound desc = could not find container \"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\": container with ID starting with 0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.440996 4779 scope.go:117] "RemoveContainer" containerID="397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.441406 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29"} err="failed to get container status \"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\": rpc error: code = NotFound desc = could not find container \"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\": container with ID starting with 397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29 not found: ID does not exist" Nov 
28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.441446 4779 scope.go:117] "RemoveContainer" containerID="356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.441725 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241"} err="failed to get container status \"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241\": rpc error: code = NotFound desc = could not find container \"356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241\": container with ID starting with 356f9859366b2c85978bdc8b4d408a84029a55da3f4ccc8c50875af41e078241 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.441753 4779 scope.go:117] "RemoveContainer" containerID="fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.442060 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b"} err="failed to get container status \"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\": rpc error: code = NotFound desc = could not find container \"fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b\": container with ID starting with fae861b14ca36a4b482a48b94ffda32e0d188204f356dfe60e2d8778b284dc1b not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.442118 4779 scope.go:117] "RemoveContainer" containerID="514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.442488 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0"} err="failed to get container status \"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\": rpc error: code = NotFound desc = could not find container \"514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0\": container with ID starting with 514dc15ba6de3deb99db4fea0d6ff38c833db499610697932265b40cd140eee0 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.442528 4779 scope.go:117] "RemoveContainer" containerID="bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.442780 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71"} err="failed to get container status \"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\": rpc error: code = NotFound desc = could not find container \"bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71\": container with ID starting with bf5232e94435ecac2ba95a6fcde02e9171439f7212d41b7d74461a1154fbea71 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.442807 4779 scope.go:117] "RemoveContainer" containerID="192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.443185 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e"} err="failed to get container status 
\"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\": rpc error: code = NotFound desc = could not find container \"192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e\": container with ID starting with 192a78b9dde34ee619d989db9957c86dff985bd1f1aa3d9288a3058751e2525e not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.443211 4779 scope.go:117] "RemoveContainer" containerID="d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.443572 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9"} err="failed to get container status \"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\": rpc error: code = NotFound desc = could not find container \"d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9\": container with ID starting with d56124f69f226000f4c737100bd92f6d382909c0b1c3045751e8375a86c030b9 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.443625 4779 scope.go:117] "RemoveContainer" containerID="759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.444295 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620"} err="failed to get container status \"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\": rpc error: code = NotFound desc = could not find container \"759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620\": container with ID starting with 759316bc826037e5b9b0eeefcb2d7d834fb0f2cd0fc590958fb5da6231b99620 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.444357 4779 scope.go:117] "RemoveContainer" containerID="683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.444732 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2"} err="failed to get container status \"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\": rpc error: code = NotFound desc = could not find container \"683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2\": container with ID starting with 683b89492cf8ff3c7745eb720e4fcf15160fd3ed405e69c9b509be53f05a8bb2 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.444755 4779 scope.go:117] "RemoveContainer" containerID="0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.445130 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868"} err="failed to get container status \"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\": rpc error: code = NotFound desc = could not find container \"0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868\": container with ID starting with 0dac9faa0e8bb9a6669ce6d45c0a4a6e787ea8c98a4395bcf4240da5dfc2d868 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.445159 4779 scope.go:117] "RemoveContainer" 
containerID="397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.445554 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29"} err="failed to get container status \"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\": rpc error: code = NotFound desc = could not find container \"397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29\": container with ID starting with 397a8cdb606c165ce4d9c11e3cb9e8729ee09edba58a4cf3dd994db0b4ff3d29 not found: ID does not exist" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.511051 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pbmbn"] Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.518689 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pbmbn"] Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.533556 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-run-openvswitch\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.533631 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-run-ovn\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.533674 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59377e57-966a-454b-8151-ecdb0cb73686-ovn-node-metrics-cert\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.533736 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-kubelet\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.533784 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-systemd-units\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.533834 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.533888 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-etc-openvswitch\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.533938 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzr4m\" (UniqueName: \"kubernetes.io/projected/59377e57-966a-454b-8151-ecdb0cb73686-kube-api-access-mzr4m\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.533980 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.534018 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-cni-bin\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.534052 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59377e57-966a-454b-8151-ecdb0cb73686-ovnkube-script-lib\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.534087 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-node-log\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.534170 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-run-netns\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.534205 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-var-lib-openvswitch\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.534234 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59377e57-966a-454b-8151-ecdb0cb73686-ovnkube-config\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.534264 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-log-socket\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.534302 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-cni-netd\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.534330 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-slash\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.534366 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-run-systemd\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.534399 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59377e57-966a-454b-8151-ecdb0cb73686-env-overrides\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.535638 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59377e57-966a-454b-8151-ecdb0cb73686-env-overrides\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.535794 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-run-openvswitch\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.535850 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-run-ovn\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536174 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-etc-openvswitch\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536239 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-kubelet\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536329 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-run-netns\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536350 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-node-log\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536361 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-var-lib-openvswitch\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536404 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-cni-netd\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536279 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-cni-bin\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536417 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-systemd-units\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536477 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-run-systemd\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536456 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536497 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-log-socket\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536511 4779 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-slash\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.536658 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59377e57-966a-454b-8151-ecdb0cb73686-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.537435 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59377e57-966a-454b-8151-ecdb0cb73686-ovnkube-config\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.538932 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59377e57-966a-454b-8151-ecdb0cb73686-ovnkube-script-lib\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.541630 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59377e57-966a-454b-8151-ecdb0cb73686-ovn-node-metrics-cert\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.556947 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzr4m\" (UniqueName: \"kubernetes.io/projected/59377e57-966a-454b-8151-ecdb0cb73686-kube-api-access-mzr4m\") pod \"ovnkube-node-bxqgl\" (UID: \"59377e57-966a-454b-8151-ecdb0cb73686\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: I1128 12:47:46.577389 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:46 crc kubenswrapper[4779]: W1128 12:47:46.602259 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59377e57_966a_454b_8151_ecdb0cb73686.slice/crio-f8305cc63b8414d66d15c1fdae8bdff382ff09b61c7ed139d1cfb1e2ecc413e1 WatchSource:0}: Error finding container f8305cc63b8414d66d15c1fdae8bdff382ff09b61c7ed139d1cfb1e2ecc413e1: Status 404 returned error can't find the container with id f8305cc63b8414d66d15c1fdae8bdff382ff09b61c7ed139d1cfb1e2ecc413e1 Nov 28 12:47:47 crc kubenswrapper[4779]: I1128 12:47:47.176813 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pzwdx_ba664a9e-76d2-4d02-889a-e7062bfc903c/kube-multus/2.log" Nov 28 12:47:47 crc kubenswrapper[4779]: I1128 12:47:47.177363 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pzwdx" event={"ID":"ba664a9e-76d2-4d02-889a-e7062bfc903c","Type":"ContainerStarted","Data":"28c19355587d02f21d859eaad48ab7796a2217ddfc4e3a6bc917579c145bb120"} Nov 28 12:47:47 crc kubenswrapper[4779]: I1128 12:47:47.180819 4779 generic.go:334] "Generic (PLEG): container finished" podID="59377e57-966a-454b-8151-ecdb0cb73686" containerID="0b34ebd5b745ddd6bfbda1733c3deaf8c669767f5217b5652a989cf61578b14c" exitCode=0 Nov 28 12:47:47 crc kubenswrapper[4779]: I1128 12:47:47.180885 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" event={"ID":"59377e57-966a-454b-8151-ecdb0cb73686","Type":"ContainerDied","Data":"0b34ebd5b745ddd6bfbda1733c3deaf8c669767f5217b5652a989cf61578b14c"} Nov 28 12:47:47 crc kubenswrapper[4779]: I1128 12:47:47.180925 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" event={"ID":"59377e57-966a-454b-8151-ecdb0cb73686","Type":"ContainerStarted","Data":"f8305cc63b8414d66d15c1fdae8bdff382ff09b61c7ed139d1cfb1e2ecc413e1"} Nov 28 12:47:47 crc kubenswrapper[4779]: I1128 12:47:47.740034 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f4f43e-a921-41b2-aa88-506055daff60" path="/var/lib/kubelet/pods/35f4f43e-a921-41b2-aa88-506055daff60/volumes" Nov 28 12:47:48 crc kubenswrapper[4779]: I1128 12:47:48.196038 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" event={"ID":"59377e57-966a-454b-8151-ecdb0cb73686","Type":"ContainerStarted","Data":"5023edc1e5c1aa78af8bbb3876aa4823e50bba4495ebd30703bae7866474107f"} Nov 28 12:47:48 crc kubenswrapper[4779]: I1128 12:47:48.196156 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" event={"ID":"59377e57-966a-454b-8151-ecdb0cb73686","Type":"ContainerStarted","Data":"255b3d44640e6fa95881c6bccb359ecffeb2a2812760af2d730f1b0339bacdbe"} Nov 28 12:47:48 crc kubenswrapper[4779]: I1128 12:47:48.196192 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" event={"ID":"59377e57-966a-454b-8151-ecdb0cb73686","Type":"ContainerStarted","Data":"8d217840177a9de9e54facbc9de924ff3a89f398e7d5f4330be0ae8f139850df"} Nov 28 12:47:48 crc kubenswrapper[4779]: I1128 12:47:48.196219 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" 
event={"ID":"59377e57-966a-454b-8151-ecdb0cb73686","Type":"ContainerStarted","Data":"875be4f689bb1f46b0bd9a44ea65113bd37151b2a5f8afc441f88c72136976a0"} Nov 28 12:47:48 crc kubenswrapper[4779]: I1128 12:47:48.196250 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" event={"ID":"59377e57-966a-454b-8151-ecdb0cb73686","Type":"ContainerStarted","Data":"22d795282ecc985094436315a621242618d85ccdfce7ed89eac12ede96962ac6"} Nov 28 12:47:48 crc kubenswrapper[4779]: I1128 12:47:48.196277 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" event={"ID":"59377e57-966a-454b-8151-ecdb0cb73686","Type":"ContainerStarted","Data":"d31eb9fdc656fe8aa80db6b112e83312db52d76f732bce8b810bab3e1480db95"} Nov 28 12:47:51 crc kubenswrapper[4779]: I1128 12:47:51.219716 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" event={"ID":"59377e57-966a-454b-8151-ecdb0cb73686","Type":"ContainerStarted","Data":"a9fa27bc3e00955f57d696de5d66b37421219c7fb0f8164ebbc93b88dd24a79e"} Nov 28 12:47:54 crc kubenswrapper[4779]: I1128 12:47:54.240853 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" event={"ID":"59377e57-966a-454b-8151-ecdb0cb73686","Type":"ContainerStarted","Data":"e86e9d2c2261503e182f68076ae835daa8289d7ca96ae28ff698a5ac30e0cd66"} Nov 28 12:47:54 crc kubenswrapper[4779]: I1128 12:47:54.241289 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:54 crc kubenswrapper[4779]: I1128 12:47:54.241304 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:54 crc kubenswrapper[4779]: I1128 12:47:54.241315 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:54 crc kubenswrapper[4779]: I1128 12:47:54.269624 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:54 crc kubenswrapper[4779]: I1128 12:47:54.273141 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:47:54 crc kubenswrapper[4779]: I1128 12:47:54.273590 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" podStartSLOduration=8.273581147 podStartE2EDuration="8.273581147s" podCreationTimestamp="2025-11-28 12:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:47:54.270606977 +0000 UTC m=+734.836282331" watchObservedRunningTime="2025-11-28 12:47:54.273581147 +0000 UTC m=+734.839256501" Nov 28 12:48:16 crc kubenswrapper[4779]: I1128 12:48:16.284736 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:48:16 crc kubenswrapper[4779]: I1128 12:48:16.285444 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:48:16 crc kubenswrapper[4779]: I1128 12:48:16.608664 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxqgl" Nov 28 12:48:19 crc kubenswrapper[4779]: I1128 12:48:19.685866 4779 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.130133 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659"] Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.132250 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.138957 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.148559 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659"] Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.206323 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f88ccd92-f82a-4b6a-9502-f458938ab085-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659\" (UID: \"f88ccd92-f82a-4b6a-9502-f458938ab085\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.206411 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f88ccd92-f82a-4b6a-9502-f458938ab085-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659\" (UID: \"f88ccd92-f82a-4b6a-9502-f458938ab085\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.206453 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgb5z\" (UniqueName: \"kubernetes.io/projected/f88ccd92-f82a-4b6a-9502-f458938ab085-kube-api-access-fgb5z\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659\" (UID: \"f88ccd92-f82a-4b6a-9502-f458938ab085\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.307971 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f88ccd92-f82a-4b6a-9502-f458938ab085-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659\" (UID: \"f88ccd92-f82a-4b6a-9502-f458938ab085\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.308042 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f88ccd92-f82a-4b6a-9502-f458938ab085-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659\" (UID: 
\"f88ccd92-f82a-4b6a-9502-f458938ab085\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.308078 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgb5z\" (UniqueName: \"kubernetes.io/projected/f88ccd92-f82a-4b6a-9502-f458938ab085-kube-api-access-fgb5z\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659\" (UID: \"f88ccd92-f82a-4b6a-9502-f458938ab085\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.308723 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f88ccd92-f82a-4b6a-9502-f458938ab085-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659\" (UID: \"f88ccd92-f82a-4b6a-9502-f458938ab085\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.308948 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f88ccd92-f82a-4b6a-9502-f458938ab085-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659\" (UID: \"f88ccd92-f82a-4b6a-9502-f458938ab085\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.338221 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgb5z\" (UniqueName: \"kubernetes.io/projected/f88ccd92-f82a-4b6a-9502-f458938ab085-kube-api-access-fgb5z\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659\" (UID: \"f88ccd92-f82a-4b6a-9502-f458938ab085\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.503417 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:25 crc kubenswrapper[4779]: I1128 12:48:25.787686 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659"] Nov 28 12:48:26 crc kubenswrapper[4779]: I1128 12:48:26.677502 4779 generic.go:334] "Generic (PLEG): container finished" podID="f88ccd92-f82a-4b6a-9502-f458938ab085" containerID="cb75703d4cbeae893dee2041a55362274aa48a1f4922ba4145f3af716ceeeb42" exitCode=0 Nov 28 12:48:26 crc kubenswrapper[4779]: I1128 12:48:26.677596 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" event={"ID":"f88ccd92-f82a-4b6a-9502-f458938ab085","Type":"ContainerDied","Data":"cb75703d4cbeae893dee2041a55362274aa48a1f4922ba4145f3af716ceeeb42"} Nov 28 12:48:26 crc kubenswrapper[4779]: I1128 12:48:26.677961 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" event={"ID":"f88ccd92-f82a-4b6a-9502-f458938ab085","Type":"ContainerStarted","Data":"53bece80c01e2bf3b4500ad5d3dd6215300ff8f114581b690562f177d904b791"} Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.486130 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6ktnp"] Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.488845 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.503479 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ktnp"] Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.552346 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9062671e-599e-4766-a93d-3bc5acc19910-catalog-content\") pod \"redhat-operators-6ktnp\" (UID: \"9062671e-599e-4766-a93d-3bc5acc19910\") " pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.552479 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lfj2\" (UniqueName: \"kubernetes.io/projected/9062671e-599e-4766-a93d-3bc5acc19910-kube-api-access-7lfj2\") pod \"redhat-operators-6ktnp\" (UID: \"9062671e-599e-4766-a93d-3bc5acc19910\") " pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.552536 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9062671e-599e-4766-a93d-3bc5acc19910-utilities\") pod \"redhat-operators-6ktnp\" (UID: \"9062671e-599e-4766-a93d-3bc5acc19910\") " pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.654650 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lfj2\" (UniqueName: \"kubernetes.io/projected/9062671e-599e-4766-a93d-3bc5acc19910-kube-api-access-7lfj2\") pod \"redhat-operators-6ktnp\" (UID: \"9062671e-599e-4766-a93d-3bc5acc19910\") " pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.654715 4779 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9062671e-599e-4766-a93d-3bc5acc19910-utilities\") pod \"redhat-operators-6ktnp\" (UID: \"9062671e-599e-4766-a93d-3bc5acc19910\") " pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.654778 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9062671e-599e-4766-a93d-3bc5acc19910-catalog-content\") pod \"redhat-operators-6ktnp\" (UID: \"9062671e-599e-4766-a93d-3bc5acc19910\") " pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.655301 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9062671e-599e-4766-a93d-3bc5acc19910-catalog-content\") pod \"redhat-operators-6ktnp\" (UID: \"9062671e-599e-4766-a93d-3bc5acc19910\") " pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.655446 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9062671e-599e-4766-a93d-3bc5acc19910-utilities\") pod \"redhat-operators-6ktnp\" (UID: \"9062671e-599e-4766-a93d-3bc5acc19910\") " pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.687550 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lfj2\" (UniqueName: \"kubernetes.io/projected/9062671e-599e-4766-a93d-3bc5acc19910-kube-api-access-7lfj2\") pod \"redhat-operators-6ktnp\" (UID: \"9062671e-599e-4766-a93d-3bc5acc19910\") " pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:27 crc kubenswrapper[4779]: I1128 12:48:27.867131 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:28 crc kubenswrapper[4779]: I1128 12:48:28.138241 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ktnp"] Nov 28 12:48:28 crc kubenswrapper[4779]: W1128 12:48:28.150254 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9062671e_599e_4766_a93d_3bc5acc19910.slice/crio-bc1801bba03582e1e70f5acbb6d3c022fc3eaf93e86fdbf8f50a268b9154f4e3 WatchSource:0}: Error finding container bc1801bba03582e1e70f5acbb6d3c022fc3eaf93e86fdbf8f50a268b9154f4e3: Status 404 returned error can't find the container with id bc1801bba03582e1e70f5acbb6d3c022fc3eaf93e86fdbf8f50a268b9154f4e3 Nov 28 12:48:28 crc kubenswrapper[4779]: I1128 12:48:28.697477 4779 generic.go:334] "Generic (PLEG): container finished" podID="9062671e-599e-4766-a93d-3bc5acc19910" containerID="c080b7dd8a61d21deb95e2fc1278fa34afc8337669b71f4bddb9240f9a421ead" exitCode=0 Nov 28 12:48:28 crc kubenswrapper[4779]: I1128 12:48:28.697544 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ktnp" event={"ID":"9062671e-599e-4766-a93d-3bc5acc19910","Type":"ContainerDied","Data":"c080b7dd8a61d21deb95e2fc1278fa34afc8337669b71f4bddb9240f9a421ead"} Nov 28 12:48:28 crc kubenswrapper[4779]: I1128 12:48:28.697571 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ktnp" event={"ID":"9062671e-599e-4766-a93d-3bc5acc19910","Type":"ContainerStarted","Data":"bc1801bba03582e1e70f5acbb6d3c022fc3eaf93e86fdbf8f50a268b9154f4e3"} Nov 28 12:48:28 crc kubenswrapper[4779]: I1128 12:48:28.700719 4779 generic.go:334] "Generic (PLEG): container finished" podID="f88ccd92-f82a-4b6a-9502-f458938ab085" containerID="b1a137664e73f6178d8169a9f22a398d0b7e2e12c455d2826a98754eb76eed64" exitCode=0 Nov 28 12:48:28 crc kubenswrapper[4779]: I1128 12:48:28.700763 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" event={"ID":"f88ccd92-f82a-4b6a-9502-f458938ab085","Type":"ContainerDied","Data":"b1a137664e73f6178d8169a9f22a398d0b7e2e12c455d2826a98754eb76eed64"} Nov 28 12:48:29 crc kubenswrapper[4779]: I1128 12:48:29.709934 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ktnp" event={"ID":"9062671e-599e-4766-a93d-3bc5acc19910","Type":"ContainerStarted","Data":"9dd030489483126df933a21041cb212191e542acb7184d433b56b6fede4e3cdb"} Nov 28 12:48:29 crc kubenswrapper[4779]: I1128 12:48:29.715628 4779 generic.go:334] "Generic (PLEG): container finished" podID="f88ccd92-f82a-4b6a-9502-f458938ab085" containerID="6b5d3f7aea2734b738631350544f39c977a7a34306f04a6236e0848a5cc7d96a" exitCode=0 Nov 28 12:48:29 crc kubenswrapper[4779]: I1128 12:48:29.715668 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" event={"ID":"f88ccd92-f82a-4b6a-9502-f458938ab085","Type":"ContainerDied","Data":"6b5d3f7aea2734b738631350544f39c977a7a34306f04a6236e0848a5cc7d96a"} Nov 28 12:48:30 crc kubenswrapper[4779]: I1128 12:48:30.723998 4779 generic.go:334] "Generic (PLEG): container finished" podID="9062671e-599e-4766-a93d-3bc5acc19910" containerID="9dd030489483126df933a21041cb212191e542acb7184d433b56b6fede4e3cdb" exitCode=0 Nov 28 12:48:30 crc kubenswrapper[4779]: I1128 12:48:30.724297 4779 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ktnp" event={"ID":"9062671e-599e-4766-a93d-3bc5acc19910","Type":"ContainerDied","Data":"9dd030489483126df933a21041cb212191e542acb7184d433b56b6fede4e3cdb"} Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.080911 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.101777 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgb5z\" (UniqueName: \"kubernetes.io/projected/f88ccd92-f82a-4b6a-9502-f458938ab085-kube-api-access-fgb5z\") pod \"f88ccd92-f82a-4b6a-9502-f458938ab085\" (UID: \"f88ccd92-f82a-4b6a-9502-f458938ab085\") " Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.101878 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f88ccd92-f82a-4b6a-9502-f458938ab085-bundle\") pod \"f88ccd92-f82a-4b6a-9502-f458938ab085\" (UID: \"f88ccd92-f82a-4b6a-9502-f458938ab085\") " Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.101929 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f88ccd92-f82a-4b6a-9502-f458938ab085-util\") pod \"f88ccd92-f82a-4b6a-9502-f458938ab085\" (UID: \"f88ccd92-f82a-4b6a-9502-f458938ab085\") " Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.103171 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f88ccd92-f82a-4b6a-9502-f458938ab085-bundle" (OuterVolumeSpecName: "bundle") pod "f88ccd92-f82a-4b6a-9502-f458938ab085" (UID: "f88ccd92-f82a-4b6a-9502-f458938ab085"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.110860 4779 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f88ccd92-f82a-4b6a-9502-f458938ab085-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.111987 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88ccd92-f82a-4b6a-9502-f458938ab085-kube-api-access-fgb5z" (OuterVolumeSpecName: "kube-api-access-fgb5z") pod "f88ccd92-f82a-4b6a-9502-f458938ab085" (UID: "f88ccd92-f82a-4b6a-9502-f458938ab085"). InnerVolumeSpecName "kube-api-access-fgb5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.137261 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f88ccd92-f82a-4b6a-9502-f458938ab085-util" (OuterVolumeSpecName: "util") pod "f88ccd92-f82a-4b6a-9502-f458938ab085" (UID: "f88ccd92-f82a-4b6a-9502-f458938ab085"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.212565 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgb5z\" (UniqueName: \"kubernetes.io/projected/f88ccd92-f82a-4b6a-9502-f458938ab085-kube-api-access-fgb5z\") on node \"crc\" DevicePath \"\"" Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.212611 4779 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f88ccd92-f82a-4b6a-9502-f458938ab085-util\") on node \"crc\" DevicePath \"\"" Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.737416 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ktnp" event={"ID":"9062671e-599e-4766-a93d-3bc5acc19910","Type":"ContainerStarted","Data":"5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06"} Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.740645 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" event={"ID":"f88ccd92-f82a-4b6a-9502-f458938ab085","Type":"ContainerDied","Data":"53bece80c01e2bf3b4500ad5d3dd6215300ff8f114581b690562f177d904b791"} Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.740685 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53bece80c01e2bf3b4500ad5d3dd6215300ff8f114581b690562f177d904b791" Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.740756 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659" Nov 28 12:48:31 crc kubenswrapper[4779]: I1128 12:48:31.771775 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6ktnp" podStartSLOduration=2.217676487 podStartE2EDuration="4.771714351s" podCreationTimestamp="2025-11-28 12:48:27 +0000 UTC" firstStartedPulling="2025-11-28 12:48:28.699181774 +0000 UTC m=+769.264857128" lastFinishedPulling="2025-11-28 12:48:31.253219608 +0000 UTC m=+771.818894992" observedRunningTime="2025-11-28 12:48:31.760256994 +0000 UTC m=+772.325932408" watchObservedRunningTime="2025-11-28 12:48:31.771714351 +0000 UTC m=+772.337389735" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.409713 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-27lnx"] Nov 28 12:48:35 crc kubenswrapper[4779]: E1128 12:48:35.410038 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88ccd92-f82a-4b6a-9502-f458938ab085" containerName="util" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.410060 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88ccd92-f82a-4b6a-9502-f458938ab085" containerName="util" Nov 28 12:48:35 crc kubenswrapper[4779]: E1128 12:48:35.410089 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88ccd92-f82a-4b6a-9502-f458938ab085" containerName="pull" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.410124 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88ccd92-f82a-4b6a-9502-f458938ab085" containerName="pull" Nov 28 12:48:35 crc kubenswrapper[4779]: E1128 12:48:35.410138 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88ccd92-f82a-4b6a-9502-f458938ab085" containerName="extract" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.410148 4779 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f88ccd92-f82a-4b6a-9502-f458938ab085" containerName="extract" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.410299 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88ccd92-f82a-4b6a-9502-f458938ab085" containerName="extract" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.410861 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-27lnx" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.416747 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.417182 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.419600 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8q5sk" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.428951 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-27lnx"] Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.476840 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68ddw\" (UniqueName: \"kubernetes.io/projected/82cdcdcc-f4b1-4f17-b8be-81e5525a2438-kube-api-access-68ddw\") pod \"nmstate-operator-5b5b58f5c8-27lnx\" (UID: \"82cdcdcc-f4b1-4f17-b8be-81e5525a2438\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-27lnx" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.578664 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68ddw\" (UniqueName: \"kubernetes.io/projected/82cdcdcc-f4b1-4f17-b8be-81e5525a2438-kube-api-access-68ddw\") pod \"nmstate-operator-5b5b58f5c8-27lnx\" (UID: \"82cdcdcc-f4b1-4f17-b8be-81e5525a2438\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-27lnx" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.618378 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68ddw\" (UniqueName: \"kubernetes.io/projected/82cdcdcc-f4b1-4f17-b8be-81e5525a2438-kube-api-access-68ddw\") pod \"nmstate-operator-5b5b58f5c8-27lnx\" (UID: \"82cdcdcc-f4b1-4f17-b8be-81e5525a2438\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-27lnx" Nov 28 12:48:35 crc kubenswrapper[4779]: I1128 12:48:35.784915 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-27lnx" Nov 28 12:48:36 crc kubenswrapper[4779]: I1128 12:48:36.111940 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-27lnx"] Nov 28 12:48:36 crc kubenswrapper[4779]: I1128 12:48:36.776085 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-27lnx" event={"ID":"82cdcdcc-f4b1-4f17-b8be-81e5525a2438","Type":"ContainerStarted","Data":"5414508b935511f4912c1d2693bf08a7e235a06b9ff37b195e812e5a380564b9"} Nov 28 12:48:37 crc kubenswrapper[4779]: I1128 12:48:37.868238 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:37 crc kubenswrapper[4779]: I1128 12:48:37.869412 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6ktnp" Nov 28 12:48:38 crc kubenswrapper[4779]: I1128 12:48:38.922970 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6ktnp" podUID="9062671e-599e-4766-a93d-3bc5acc19910" containerName="registry-server" probeResult="failure" output=< Nov 28 12:48:38 crc kubenswrapper[4779]: timeout: failed to connect service ":50051" within 1s Nov 28 12:48:38 crc kubenswrapper[4779]: > Nov 28 12:48:39 crc kubenswrapper[4779]: I1128 12:48:39.796508 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-27lnx" event={"ID":"82cdcdcc-f4b1-4f17-b8be-81e5525a2438","Type":"ContainerStarted","Data":"de0916fdbf12ebd35d6bf0a00f5eb2b433bd957b116ce482f5352fd14621d154"} Nov 28 12:48:39 crc kubenswrapper[4779]: I1128 12:48:39.839885 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-27lnx" podStartSLOduration=2.008483926 podStartE2EDuration="4.839858738s" podCreationTimestamp="2025-11-28 12:48:35 +0000 UTC" firstStartedPulling="2025-11-28 12:48:36.133764779 +0000 UTC m=+776.699440133" lastFinishedPulling="2025-11-28 12:48:38.965139561 +0000 UTC m=+779.530814945" observedRunningTime="2025-11-28 12:48:39.81614056 +0000 UTC m=+780.381815944" watchObservedRunningTime="2025-11-28 12:48:39.839858738 +0000 UTC m=+780.405534122" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.376647 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-h6q7b"] Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.378409 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-h6q7b" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.381865 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vc899" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.385638 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w"] Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.386309 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.387972 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.412933 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w"] Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.427487 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-h6q7b"] Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.433508 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mqs42"] Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.434292 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.515155 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/70ee469b-f21f-4b94-9f6a-1b79db90e4fd-ovs-socket\") pod \"nmstate-handler-mqs42\" (UID: \"70ee469b-f21f-4b94-9f6a-1b79db90e4fd\") " pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.515202 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/70ee469b-f21f-4b94-9f6a-1b79db90e4fd-nmstate-lock\") pod \"nmstate-handler-mqs42\" (UID: \"70ee469b-f21f-4b94-9f6a-1b79db90e4fd\") " pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.515263 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qqqf\" (UniqueName: \"kubernetes.io/projected/2ea9d3e0-ee7b-48bc-a358-689318fa4dae-kube-api-access-8qqqf\") pod \"nmstate-webhook-5f6d4c5ccb-zrh7w\" (UID: \"2ea9d3e0-ee7b-48bc-a358-689318fa4dae\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.515304 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpvnn\" (UniqueName: \"kubernetes.io/projected/70ee469b-f21f-4b94-9f6a-1b79db90e4fd-kube-api-access-cpvnn\") pod \"nmstate-handler-mqs42\" (UID: \"70ee469b-f21f-4b94-9f6a-1b79db90e4fd\") " pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.515328 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/70ee469b-f21f-4b94-9f6a-1b79db90e4fd-dbus-socket\") pod \"nmstate-handler-mqs42\" (UID: \"70ee469b-f21f-4b94-9f6a-1b79db90e4fd\") " pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.515350 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2ea9d3e0-ee7b-48bc-a358-689318fa4dae-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-zrh7w\" (UID: \"2ea9d3e0-ee7b-48bc-a358-689318fa4dae\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.515372 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5rhh\" (UniqueName: \"kubernetes.io/projected/0c9a8cc1-da76-4824-8303-fe9e18c76af3-kube-api-access-q5rhh\") pod \"nmstate-metrics-7f946cbc9-h6q7b\" (UID: \"0c9a8cc1-da76-4824-8303-fe9e18c76af3\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-h6q7b" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.517763 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2"] Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.518531 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2" Nov 28 12:48:45 crc kubenswrapper[4779]: W1128 12:48:45.520294 4779 reflector.go:561] object-"openshift-nmstate"/"nginx-conf": failed to list *v1.ConfigMap: configmaps "nginx-conf" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Nov 28 12:48:45 crc kubenswrapper[4779]: W1128 12:48:45.520321 4779 reflector.go:561] object-"openshift-nmstate"/"plugin-serving-cert": failed to list *v1.Secret: secrets "plugin-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Nov 28 12:48:45 crc kubenswrapper[4779]: E1128 12:48:45.520335 4779 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nginx-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"nginx-conf\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 12:48:45 crc kubenswrapper[4779]: E1128 12:48:45.520355 4779 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"plugin-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"plugin-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.525394 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-pp99c" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.540182 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2"] Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.616794 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qqqf\" (UniqueName: \"kubernetes.io/projected/2ea9d3e0-ee7b-48bc-a358-689318fa4dae-kube-api-access-8qqqf\") pod \"nmstate-webhook-5f6d4c5ccb-zrh7w\" (UID: \"2ea9d3e0-ee7b-48bc-a358-689318fa4dae\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.616847 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a297b2-fc61-4bcf-9872-106b5776cb43-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-ss4d2\" (UID: \"a8a297b2-fc61-4bcf-9872-106b5776cb43\") " 
pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.616870 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpvnn\" (UniqueName: \"kubernetes.io/projected/70ee469b-f21f-4b94-9f6a-1b79db90e4fd-kube-api-access-cpvnn\") pod \"nmstate-handler-mqs42\" (UID: \"70ee469b-f21f-4b94-9f6a-1b79db90e4fd\") " pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.616891 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd9x9\" (UniqueName: \"kubernetes.io/projected/a8a297b2-fc61-4bcf-9872-106b5776cb43-kube-api-access-vd9x9\") pod \"nmstate-console-plugin-7fbb5f6569-ss4d2\" (UID: \"a8a297b2-fc61-4bcf-9872-106b5776cb43\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.616909 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/70ee469b-f21f-4b94-9f6a-1b79db90e4fd-dbus-socket\") pod \"nmstate-handler-mqs42\" (UID: \"70ee469b-f21f-4b94-9f6a-1b79db90e4fd\") " pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.616926 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2ea9d3e0-ee7b-48bc-a358-689318fa4dae-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-zrh7w\" (UID: \"2ea9d3e0-ee7b-48bc-a358-689318fa4dae\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.616945 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5rhh\" (UniqueName: \"kubernetes.io/projected/0c9a8cc1-da76-4824-8303-fe9e18c76af3-kube-api-access-q5rhh\") pod \"nmstate-metrics-7f946cbc9-h6q7b\" (UID: \"0c9a8cc1-da76-4824-8303-fe9e18c76af3\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-h6q7b" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.616983 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/70ee469b-f21f-4b94-9f6a-1b79db90e4fd-ovs-socket\") pod \"nmstate-handler-mqs42\" (UID: \"70ee469b-f21f-4b94-9f6a-1b79db90e4fd\") " pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.617001 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/70ee469b-f21f-4b94-9f6a-1b79db90e4fd-nmstate-lock\") pod \"nmstate-handler-mqs42\" (UID: \"70ee469b-f21f-4b94-9f6a-1b79db90e4fd\") " pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.617022 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a8a297b2-fc61-4bcf-9872-106b5776cb43-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-ss4d2\" (UID: \"a8a297b2-fc61-4bcf-9872-106b5776cb43\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.617666 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/70ee469b-f21f-4b94-9f6a-1b79db90e4fd-ovs-socket\") pod 
\"nmstate-handler-mqs42\" (UID: \"70ee469b-f21f-4b94-9f6a-1b79db90e4fd\") " pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.617871 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/70ee469b-f21f-4b94-9f6a-1b79db90e4fd-nmstate-lock\") pod \"nmstate-handler-mqs42\" (UID: \"70ee469b-f21f-4b94-9f6a-1b79db90e4fd\") " pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.617965 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/70ee469b-f21f-4b94-9f6a-1b79db90e4fd-dbus-socket\") pod \"nmstate-handler-mqs42\" (UID: \"70ee469b-f21f-4b94-9f6a-1b79db90e4fd\") " pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.624740 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2ea9d3e0-ee7b-48bc-a358-689318fa4dae-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-zrh7w\" (UID: \"2ea9d3e0-ee7b-48bc-a358-689318fa4dae\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.634895 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpvnn\" (UniqueName: \"kubernetes.io/projected/70ee469b-f21f-4b94-9f6a-1b79db90e4fd-kube-api-access-cpvnn\") pod \"nmstate-handler-mqs42\" (UID: \"70ee469b-f21f-4b94-9f6a-1b79db90e4fd\") " pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.637979 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qqqf\" (UniqueName: \"kubernetes.io/projected/2ea9d3e0-ee7b-48bc-a358-689318fa4dae-kube-api-access-8qqqf\") pod \"nmstate-webhook-5f6d4c5ccb-zrh7w\" (UID: \"2ea9d3e0-ee7b-48bc-a358-689318fa4dae\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.638781 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5rhh\" (UniqueName: \"kubernetes.io/projected/0c9a8cc1-da76-4824-8303-fe9e18c76af3-kube-api-access-q5rhh\") pod \"nmstate-metrics-7f946cbc9-h6q7b\" (UID: \"0c9a8cc1-da76-4824-8303-fe9e18c76af3\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-h6q7b" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.705955 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-h6q7b" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.716710 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-66db7f5f8b-2swkg"] Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.717805 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a8a297b2-fc61-4bcf-9872-106b5776cb43-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-ss4d2\" (UID: \"a8a297b2-fc61-4bcf-9872-106b5776cb43\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.717891 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a297b2-fc61-4bcf-9872-106b5776cb43-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-ss4d2\" (UID: \"a8a297b2-fc61-4bcf-9872-106b5776cb43\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.717948 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd9x9\" (UniqueName: \"kubernetes.io/projected/a8a297b2-fc61-4bcf-9872-106b5776cb43-kube-api-access-vd9x9\") pod \"nmstate-console-plugin-7fbb5f6569-ss4d2\" (UID: \"a8a297b2-fc61-4bcf-9872-106b5776cb43\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.718958 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.719510 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.734223 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66db7f5f8b-2swkg"] Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.751057 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd9x9\" (UniqueName: \"kubernetes.io/projected/a8a297b2-fc61-4bcf-9872-106b5776cb43-kube-api-access-vd9x9\") pod \"nmstate-console-plugin-7fbb5f6569-ss4d2\" (UID: \"a8a297b2-fc61-4bcf-9872-106b5776cb43\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.757431 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-mqs42" Nov 28 12:48:45 crc kubenswrapper[4779]: W1128 12:48:45.779018 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70ee469b_f21f_4b94_9f6a_1b79db90e4fd.slice/crio-e591aaa94c542885717644c76450b2846c9e58104b0ee76d0588b6bd01e09d7e WatchSource:0}: Error finding container e591aaa94c542885717644c76450b2846c9e58104b0ee76d0588b6bd01e09d7e: Status 404 returned error can't find the container with id e591aaa94c542885717644c76450b2846c9e58104b0ee76d0588b6bd01e09d7e Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.819021 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-console-config\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.819292 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-console-serving-cert\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.819324 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-console-oauth-config\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.819348 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-service-ca\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.819475 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-trusted-ca-bundle\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.819505 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-oauth-serving-cert\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.819565 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44px6\" (UniqueName: \"kubernetes.io/projected/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-kube-api-access-44px6\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 
12:48:45.846372 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mqs42" event={"ID":"70ee469b-f21f-4b94-9f6a-1b79db90e4fd","Type":"ContainerStarted","Data":"e591aaa94c542885717644c76450b2846c9e58104b0ee76d0588b6bd01e09d7e"} Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.920447 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-console-config\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.920484 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-console-serving-cert\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.920503 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-console-oauth-config\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.920520 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-service-ca\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.920589 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-trusted-ca-bundle\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.920610 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-oauth-serving-cert\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.920624 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44px6\" (UniqueName: \"kubernetes.io/projected/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-kube-api-access-44px6\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.922837 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-console-config\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.923737 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-oauth-serving-cert\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.924322 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-service-ca\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.924592 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-trusted-ca-bundle\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.928311 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-console-serving-cert\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.931284 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-console-oauth-config\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:45 crc kubenswrapper[4779]: I1128 12:48:45.935827 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44px6\" (UniqueName: \"kubernetes.io/projected/53fefe07-fd13-4ed1-b985-8f1c3ed47ce4-kube-api-access-44px6\") pod \"console-66db7f5f8b-2swkg\" (UID: \"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4\") " pod="openshift-console/console-66db7f5f8b-2swkg" Nov 28 12:48:46 crc kubenswrapper[4779]: I1128 12:48:46.095724 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66db7f5f8b-2swkg"
Nov 28 12:48:46 crc kubenswrapper[4779]: I1128 12:48:46.123579 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-h6q7b"]
Nov 28 12:48:46 crc kubenswrapper[4779]: I1128 12:48:46.192432 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w"]
Nov 28 12:48:46 crc kubenswrapper[4779]: W1128 12:48:46.203982 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ea9d3e0_ee7b_48bc_a358_689318fa4dae.slice/crio-779a8a184f2817454738b6e3e05f8bd6cdb3fd792ee83617e3a87ab5cc583d70 WatchSource:0}: Error finding container 779a8a184f2817454738b6e3e05f8bd6cdb3fd792ee83617e3a87ab5cc583d70: Status 404 returned error can't find the container with id 779a8a184f2817454738b6e3e05f8bd6cdb3fd792ee83617e3a87ab5cc583d70
Nov 28 12:48:46 crc kubenswrapper[4779]: I1128 12:48:46.285146 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 12:48:46 crc kubenswrapper[4779]: I1128 12:48:46.285210 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 12:48:46 crc kubenswrapper[4779]: I1128 12:48:46.353439 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66db7f5f8b-2swkg"]
Nov 28 12:48:46 crc kubenswrapper[4779]: W1128 12:48:46.357312 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53fefe07_fd13_4ed1_b985_8f1c3ed47ce4.slice/crio-ecd8d984d9add7d1e89602347466a5206abd0f16b24cf5602c3bcf005bac4ce1 WatchSource:0}: Error finding container ecd8d984d9add7d1e89602347466a5206abd0f16b24cf5602c3bcf005bac4ce1: Status 404 returned error can't find the container with id ecd8d984d9add7d1e89602347466a5206abd0f16b24cf5602c3bcf005bac4ce1
Nov 28 12:48:46 crc kubenswrapper[4779]: E1128 12:48:46.718909 4779 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: failed to sync secret cache: timed out waiting for the condition
Nov 28 12:48:46 crc kubenswrapper[4779]: E1128 12:48:46.719572 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8a297b2-fc61-4bcf-9872-106b5776cb43-plugin-serving-cert podName:a8a297b2-fc61-4bcf-9872-106b5776cb43 nodeName:}" failed. No retries permitted until 2025-11-28 12:48:47.219489065 +0000 UTC m=+787.785164449 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a8a297b2-fc61-4bcf-9872-106b5776cb43-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-ss4d2" (UID: "a8a297b2-fc61-4bcf-9872-106b5776cb43") : failed to sync secret cache: timed out waiting for the condition
Nov 28 12:48:46 crc kubenswrapper[4779]: E1128 12:48:46.719011 4779 configmap.go:193] Couldn't get configMap openshift-nmstate/nginx-conf: failed to sync configmap cache: timed out waiting for the condition
Nov 28 12:48:46 crc kubenswrapper[4779]: E1128 12:48:46.720014 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8a297b2-fc61-4bcf-9872-106b5776cb43-nginx-conf podName:a8a297b2-fc61-4bcf-9872-106b5776cb43 nodeName:}" failed. No retries permitted until 2025-11-28 12:48:47.219996489 +0000 UTC m=+787.785671873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/a8a297b2-fc61-4bcf-9872-106b5776cb43-nginx-conf") pod "nmstate-console-plugin-7fbb5f6569-ss4d2" (UID: "a8a297b2-fc61-4bcf-9872-106b5776cb43") : failed to sync configmap cache: timed out waiting for the condition
Nov 28 12:48:46 crc kubenswrapper[4779]: I1128 12:48:46.855596 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-h6q7b" event={"ID":"0c9a8cc1-da76-4824-8303-fe9e18c76af3","Type":"ContainerStarted","Data":"fbb923dbe7c553f5dfe9e7d36e93ffc321adbd8dfb2ab4e8b918afa6964be357"}
Nov 28 12:48:46 crc kubenswrapper[4779]: I1128 12:48:46.857257 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66db7f5f8b-2swkg" event={"ID":"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4","Type":"ContainerStarted","Data":"ecd8d984d9add7d1e89602347466a5206abd0f16b24cf5602c3bcf005bac4ce1"}
Nov 28 12:48:46 crc kubenswrapper[4779]: I1128 12:48:46.858612 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w" event={"ID":"2ea9d3e0-ee7b-48bc-a358-689318fa4dae","Type":"ContainerStarted","Data":"779a8a184f2817454738b6e3e05f8bd6cdb3fd792ee83617e3a87ab5cc583d70"}
Nov 28 12:48:46 crc kubenswrapper[4779]: I1128 12:48:46.932596 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Nov 28 12:48:47 crc kubenswrapper[4779]: I1128 12:48:47.022671 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Nov 28 12:48:47 crc kubenswrapper[4779]: I1128 12:48:47.243732 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a8a297b2-fc61-4bcf-9872-106b5776cb43-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-ss4d2\" (UID: \"a8a297b2-fc61-4bcf-9872-106b5776cb43\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2"
Nov 28 12:48:47 crc kubenswrapper[4779]: I1128 12:48:47.243851 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a297b2-fc61-4bcf-9872-106b5776cb43-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-ss4d2\" (UID: \"a8a297b2-fc61-4bcf-9872-106b5776cb43\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2"
Nov 28 12:48:47 crc kubenswrapper[4779]: I1128 12:48:47.245746 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a8a297b2-fc61-4bcf-9872-106b5776cb43-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-ss4d2\" (UID: \"a8a297b2-fc61-4bcf-9872-106b5776cb43\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2"
Nov 28 12:48:47 crc kubenswrapper[4779]: I1128 12:48:47.254364 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8a297b2-fc61-4bcf-9872-106b5776cb43-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-ss4d2\" (UID: \"a8a297b2-fc61-4bcf-9872-106b5776cb43\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2"
Nov 28 12:48:47 crc kubenswrapper[4779]: I1128 12:48:47.337314 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2"
Nov 28 12:48:47 crc kubenswrapper[4779]: I1128 12:48:47.661043 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2"]
Nov 28 12:48:47 crc kubenswrapper[4779]: I1128 12:48:47.869060 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2" event={"ID":"a8a297b2-fc61-4bcf-9872-106b5776cb43","Type":"ContainerStarted","Data":"0b25b07928352771ac864b5eab5cd2d41db08ae1e6007f05b3a3f7ea5c09bc09"}
Nov 28 12:48:47 crc kubenswrapper[4779]: I1128 12:48:47.941014 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6ktnp"
Nov 28 12:48:48 crc kubenswrapper[4779]: I1128 12:48:48.011044 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6ktnp"
Nov 28 12:48:48 crc kubenswrapper[4779]: I1128 12:48:48.192020 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ktnp"]
Nov 28 12:48:48 crc kubenswrapper[4779]: I1128 12:48:48.877007 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66db7f5f8b-2swkg" event={"ID":"53fefe07-fd13-4ed1-b985-8f1c3ed47ce4","Type":"ContainerStarted","Data":"af17af93f4ae1b8d29aa7284015f0a4502bc5c13c2b7c6d3deec7e0b383f5c61"}
Nov 28 12:48:48 crc kubenswrapper[4779]: I1128 12:48:48.905398 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66db7f5f8b-2swkg" podStartSLOduration=3.905377583 podStartE2EDuration="3.905377583s" podCreationTimestamp="2025-11-28 12:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:48:48.902044493 +0000 UTC m=+789.467719887" watchObservedRunningTime="2025-11-28 12:48:48.905377583 +0000 UTC m=+789.471052947"
Nov 28 12:48:49 crc kubenswrapper[4779]: I1128 12:48:49.886240 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mqs42" event={"ID":"70ee469b-f21f-4b94-9f6a-1b79db90e4fd","Type":"ContainerStarted","Data":"1dda676487600f74499e999c23669324259d9ad7abe96d5f7d990b03ebf17e8d"}
Nov 28 12:48:49 crc kubenswrapper[4779]: I1128 12:48:49.886339 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mqs42"
Nov 28 12:48:49 crc kubenswrapper[4779]: I1128 12:48:49.888817 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w" event={"ID":"2ea9d3e0-ee7b-48bc-a358-689318fa4dae","Type":"ContainerStarted","Data":"70d1f109d5d90a511ced8b5d0536bb970b4a5513b18476d32a5a46aa228dcd77"}
Nov 28 12:48:49 crc kubenswrapper[4779]: I1128 12:48:49.888926 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w"
Nov 28 12:48:49 crc kubenswrapper[4779]: I1128 12:48:49.892010 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-h6q7b" event={"ID":"0c9a8cc1-da76-4824-8303-fe9e18c76af3","Type":"ContainerStarted","Data":"fed74c4fb86c4bb6d54ecabccc00e3b0cd33b31512101bc18e2b98a61a7615ad"}
Nov 28 12:48:49 crc kubenswrapper[4779]: I1128 12:48:49.892600 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6ktnp" podUID="9062671e-599e-4766-a93d-3bc5acc19910" containerName="registry-server" containerID="cri-o://5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06" gracePeriod=2
Nov 28 12:48:49 crc kubenswrapper[4779]: I1128 12:48:49.909872 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mqs42" podStartSLOduration=1.229471201 podStartE2EDuration="4.909849147s" podCreationTimestamp="2025-11-28 12:48:45 +0000 UTC" firstStartedPulling="2025-11-28 12:48:45.782248978 +0000 UTC m=+786.347924342" lastFinishedPulling="2025-11-28 12:48:49.462626894 +0000 UTC m=+790.028302288" observedRunningTime="2025-11-28 12:48:49.90477099 +0000 UTC m=+790.470446374" watchObservedRunningTime="2025-11-28 12:48:49.909849147 +0000 UTC m=+790.475524531"
Nov 28 12:48:49 crc kubenswrapper[4779]: I1128 12:48:49.928476 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w" podStartSLOduration=1.662027889 podStartE2EDuration="4.928460227s" podCreationTimestamp="2025-11-28 12:48:45 +0000 UTC" firstStartedPulling="2025-11-28 12:48:46.207005897 +0000 UTC m=+786.772681261" lastFinishedPulling="2025-11-28 12:48:49.473438235 +0000 UTC m=+790.039113599" observedRunningTime="2025-11-28 12:48:49.926413662 +0000 UTC m=+790.492089026" watchObservedRunningTime="2025-11-28 12:48:49.928460227 +0000 UTC m=+790.494135581"
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.610567 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ktnp"
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.704187 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lfj2\" (UniqueName: \"kubernetes.io/projected/9062671e-599e-4766-a93d-3bc5acc19910-kube-api-access-7lfj2\") pod \"9062671e-599e-4766-a93d-3bc5acc19910\" (UID: \"9062671e-599e-4766-a93d-3bc5acc19910\") "
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.704621 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9062671e-599e-4766-a93d-3bc5acc19910-utilities\") pod \"9062671e-599e-4766-a93d-3bc5acc19910\" (UID: \"9062671e-599e-4766-a93d-3bc5acc19910\") "
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.704824 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9062671e-599e-4766-a93d-3bc5acc19910-catalog-content\") pod \"9062671e-599e-4766-a93d-3bc5acc19910\" (UID: \"9062671e-599e-4766-a93d-3bc5acc19910\") "
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.706492 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9062671e-599e-4766-a93d-3bc5acc19910-utilities" (OuterVolumeSpecName: "utilities") pod "9062671e-599e-4766-a93d-3bc5acc19910" (UID: "9062671e-599e-4766-a93d-3bc5acc19910"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.708451 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9062671e-599e-4766-a93d-3bc5acc19910-kube-api-access-7lfj2" (OuterVolumeSpecName: "kube-api-access-7lfj2") pod "9062671e-599e-4766-a93d-3bc5acc19910" (UID: "9062671e-599e-4766-a93d-3bc5acc19910"). InnerVolumeSpecName "kube-api-access-7lfj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.801115 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9062671e-599e-4766-a93d-3bc5acc19910-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9062671e-599e-4766-a93d-3bc5acc19910" (UID: "9062671e-599e-4766-a93d-3bc5acc19910"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.806231 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9062671e-599e-4766-a93d-3bc5acc19910-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.806823 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lfj2\" (UniqueName: \"kubernetes.io/projected/9062671e-599e-4766-a93d-3bc5acc19910-kube-api-access-7lfj2\") on node \"crc\" DevicePath \"\""
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.806892 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9062671e-599e-4766-a93d-3bc5acc19910-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.900572 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2" event={"ID":"a8a297b2-fc61-4bcf-9872-106b5776cb43","Type":"ContainerStarted","Data":"cb1a9dfd417f9097b0630ab286d9716c0d9ca3b7d4a81f053e0d5137fe0b11fc"}
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.904827 4779 generic.go:334] "Generic (PLEG): container finished" podID="9062671e-599e-4766-a93d-3bc5acc19910" containerID="5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06" exitCode=0
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.904872 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ktnp" event={"ID":"9062671e-599e-4766-a93d-3bc5acc19910","Type":"ContainerDied","Data":"5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06"}
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.905031 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ktnp" event={"ID":"9062671e-599e-4766-a93d-3bc5acc19910","Type":"ContainerDied","Data":"bc1801bba03582e1e70f5acbb6d3c022fc3eaf93e86fdbf8f50a268b9154f4e3"}
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.905061 4779 scope.go:117] "RemoveContainer" containerID="5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06"
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.904898 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ktnp"
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.919577 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-ss4d2" podStartSLOduration=2.947046488 podStartE2EDuration="5.919559373s" podCreationTimestamp="2025-11-28 12:48:45 +0000 UTC" firstStartedPulling="2025-11-28 12:48:47.674142451 +0000 UTC m=+788.239817845" lastFinishedPulling="2025-11-28 12:48:50.646655386 +0000 UTC m=+791.212330730" observedRunningTime="2025-11-28 12:48:50.91126628 +0000 UTC m=+791.476941634" watchObservedRunningTime="2025-11-28 12:48:50.919559373 +0000 UTC m=+791.485234727"
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.943721 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ktnp"]
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.946855 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6ktnp"]
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.946969 4779 scope.go:117] "RemoveContainer" containerID="9dd030489483126df933a21041cb212191e542acb7184d433b56b6fede4e3cdb"
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.976977 4779 scope.go:117] "RemoveContainer" containerID="c080b7dd8a61d21deb95e2fc1278fa34afc8337669b71f4bddb9240f9a421ead"
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.995326 4779 scope.go:117] "RemoveContainer" containerID="5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06"
Nov 28 12:48:50 crc kubenswrapper[4779]: E1128 12:48:50.995775 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06\": container with ID starting with 5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06 not found: ID does not exist" containerID="5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06"
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.995853 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06"} err="failed to get container status \"5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06\": rpc error: code = NotFound desc = could not find container \"5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06\": container with ID starting with 5329d97bc37d4eed3099833c7093ce16e1dd2c407f899b146c3a054751f5cf06 not found: ID does not exist"
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.995906 4779 scope.go:117] "RemoveContainer" containerID="9dd030489483126df933a21041cb212191e542acb7184d433b56b6fede4e3cdb"
Nov 28 12:48:50 crc kubenswrapper[4779]: E1128 12:48:50.996400 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dd030489483126df933a21041cb212191e542acb7184d433b56b6fede4e3cdb\": container with ID starting with 9dd030489483126df933a21041cb212191e542acb7184d433b56b6fede4e3cdb not found: ID does not exist" containerID="9dd030489483126df933a21041cb212191e542acb7184d433b56b6fede4e3cdb"
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.996452 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd030489483126df933a21041cb212191e542acb7184d433b56b6fede4e3cdb"} err="failed to get container status \"9dd030489483126df933a21041cb212191e542acb7184d433b56b6fede4e3cdb\": rpc error: code = NotFound desc = could not find container \"9dd030489483126df933a21041cb212191e542acb7184d433b56b6fede4e3cdb\": container with ID starting with 9dd030489483126df933a21041cb212191e542acb7184d433b56b6fede4e3cdb not found: ID does not exist"
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.996488 4779 scope.go:117] "RemoveContainer" containerID="c080b7dd8a61d21deb95e2fc1278fa34afc8337669b71f4bddb9240f9a421ead"
Nov 28 12:48:50 crc kubenswrapper[4779]: E1128 12:48:50.996782 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c080b7dd8a61d21deb95e2fc1278fa34afc8337669b71f4bddb9240f9a421ead\": container with ID starting with c080b7dd8a61d21deb95e2fc1278fa34afc8337669b71f4bddb9240f9a421ead not found: ID does not exist" containerID="c080b7dd8a61d21deb95e2fc1278fa34afc8337669b71f4bddb9240f9a421ead"
Nov 28 12:48:50 crc kubenswrapper[4779]: I1128 12:48:50.996817 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c080b7dd8a61d21deb95e2fc1278fa34afc8337669b71f4bddb9240f9a421ead"} err="failed to get container status \"c080b7dd8a61d21deb95e2fc1278fa34afc8337669b71f4bddb9240f9a421ead\": rpc error: code = NotFound desc = could not find container \"c080b7dd8a61d21deb95e2fc1278fa34afc8337669b71f4bddb9240f9a421ead\": container with ID starting with c080b7dd8a61d21deb95e2fc1278fa34afc8337669b71f4bddb9240f9a421ead not found: ID does not exist"
Nov 28 12:48:51 crc kubenswrapper[4779]: I1128 12:48:51.734577 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9062671e-599e-4766-a93d-3bc5acc19910" path="/var/lib/kubelet/pods/9062671e-599e-4766-a93d-3bc5acc19910/volumes"
Nov 28 12:48:52 crc kubenswrapper[4779]: I1128 12:48:52.925965 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-h6q7b" event={"ID":"0c9a8cc1-da76-4824-8303-fe9e18c76af3","Type":"ContainerStarted","Data":"2f4039dcf9e9373dda55acf9629923bebdaf90b9eb96709daf852982d95f3d78"}
Nov 28 12:48:52 crc kubenswrapper[4779]: I1128 12:48:52.958713 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-h6q7b" podStartSLOduration=2.110854556 podStartE2EDuration="7.958682534s" podCreationTimestamp="2025-11-28 12:48:45 +0000 UTC" firstStartedPulling="2025-11-28 12:48:46.135202616 +0000 UTC m=+786.700878010" lastFinishedPulling="2025-11-28 12:48:51.983030604 +0000 UTC m=+792.548705988" observedRunningTime="2025-11-28 12:48:52.952653062 +0000 UTC m=+793.518328476" watchObservedRunningTime="2025-11-28 12:48:52.958682534 +0000 UTC m=+793.524357928"
Nov 28 12:48:55 crc kubenswrapper[4779]: I1128 12:48:55.793262 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mqs42"
Nov 28 12:48:56 crc kubenswrapper[4779]: I1128 12:48:56.096392 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-66db7f5f8b-2swkg"
Nov 28 12:48:56 crc kubenswrapper[4779]: I1128 12:48:56.096484 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-66db7f5f8b-2swkg"
Nov 28 12:48:56 crc kubenswrapper[4779]: I1128 12:48:56.103902 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-66db7f5f8b-2swkg"
Nov 28 12:48:56 crc kubenswrapper[4779]: I1128 12:48:56.961050 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-66db7f5f8b-2swkg"
Nov 28 12:48:57 crc kubenswrapper[4779]: I1128 12:48:57.053938 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ctt57"]
Nov 28 12:49:05 crc kubenswrapper[4779]: I1128 12:49:05.740759 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zrh7w"
Nov 28 12:49:16 crc kubenswrapper[4779]: I1128 12:49:16.284395 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 12:49:16 crc kubenswrapper[4779]: I1128 12:49:16.285729 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 12:49:16 crc kubenswrapper[4779]: I1128 12:49:16.285862 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2"
Nov 28 12:49:16 crc kubenswrapper[4779]: I1128 12:49:16.286515 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5e93de974be41a0eb6481eaf0510c9e7e4484d2b3ab950a8d456a68806d2e6f"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 12:49:16 crc kubenswrapper[4779]: I1128 12:49:16.286665 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://f5e93de974be41a0eb6481eaf0510c9e7e4484d2b3ab950a8d456a68806d2e6f" gracePeriod=600
Nov 28 12:49:17 crc kubenswrapper[4779]: I1128 12:49:17.101518 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="f5e93de974be41a0eb6481eaf0510c9e7e4484d2b3ab950a8d456a68806d2e6f" exitCode=0
Nov 28 12:49:17 crc kubenswrapper[4779]: I1128 12:49:17.101598 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"f5e93de974be41a0eb6481eaf0510c9e7e4484d2b3ab950a8d456a68806d2e6f"}
Nov 28 12:49:17 crc kubenswrapper[4779]: I1128 12:49:17.102048 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"2ec718b6174bac7e525f03254a91895c0abe5e9151b5cc34c2ee2019a1b96a1c"}
Nov 28 12:49:17 crc kubenswrapper[4779]: I1128 12:49:17.102075 4779 scope.go:117] "RemoveContainer" containerID="31095d9d6d0a461f735b1edaee2a8d6fa31f53fbcf5ad74a652fee4e119ba7db"
Nov 28 12:49:21 crc kubenswrapper[4779]: I1128 12:49:21.967550 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"]
Nov 28 12:49:21 crc kubenswrapper[4779]: E1128 12:49:21.968291 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9062671e-599e-4766-a93d-3bc5acc19910" containerName="extract-content"
Nov 28 12:49:21 crc kubenswrapper[4779]: I1128 12:49:21.968306 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="9062671e-599e-4766-a93d-3bc5acc19910" containerName="extract-content"
Nov 28 12:49:21 crc kubenswrapper[4779]: E1128 12:49:21.968326 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9062671e-599e-4766-a93d-3bc5acc19910" containerName="registry-server"
Nov 28 12:49:21 crc kubenswrapper[4779]: I1128 12:49:21.968334 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="9062671e-599e-4766-a93d-3bc5acc19910" containerName="registry-server"
Nov 28 12:49:21 crc kubenswrapper[4779]: E1128 12:49:21.968352 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9062671e-599e-4766-a93d-3bc5acc19910" containerName="extract-utilities"
Nov 28 12:49:21 crc kubenswrapper[4779]: I1128 12:49:21.968360 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="9062671e-599e-4766-a93d-3bc5acc19910" containerName="extract-utilities"
Nov 28 12:49:21 crc kubenswrapper[4779]: I1128 12:49:21.968502 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="9062671e-599e-4766-a93d-3bc5acc19910" containerName="registry-server"
Nov 28 12:49:21 crc kubenswrapper[4779]: I1128 12:49:21.969396 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:21 crc kubenswrapper[4779]: I1128 12:49:21.975849 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 28 12:49:21 crc kubenswrapper[4779]: I1128 12:49:21.984501 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"]
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.099050 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlfsc\" (UniqueName: \"kubernetes.io/projected/f70b1dfe-4b12-40c8-8052-da91227479b0-kube-api-access-wlfsc\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8\" (UID: \"f70b1dfe-4b12-40c8-8052-da91227479b0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.099136 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f70b1dfe-4b12-40c8-8052-da91227479b0-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8\" (UID: \"f70b1dfe-4b12-40c8-8052-da91227479b0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.099278 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f70b1dfe-4b12-40c8-8052-da91227479b0-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8\" (UID: \"f70b1dfe-4b12-40c8-8052-da91227479b0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.110283 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-ctt57" podUID="bb401509-3ef4-41bc-93db-fbee2b5454b9" containerName="console" containerID="cri-o://b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6" gracePeriod=15
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.201133 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f70b1dfe-4b12-40c8-8052-da91227479b0-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8\" (UID: \"f70b1dfe-4b12-40c8-8052-da91227479b0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.201257 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlfsc\" (UniqueName: \"kubernetes.io/projected/f70b1dfe-4b12-40c8-8052-da91227479b0-kube-api-access-wlfsc\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8\" (UID: \"f70b1dfe-4b12-40c8-8052-da91227479b0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.201320 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f70b1dfe-4b12-40c8-8052-da91227479b0-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8\" (UID: \"f70b1dfe-4b12-40c8-8052-da91227479b0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.201562 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f70b1dfe-4b12-40c8-8052-da91227479b0-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8\" (UID: \"f70b1dfe-4b12-40c8-8052-da91227479b0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.201948 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f70b1dfe-4b12-40c8-8052-da91227479b0-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8\" (UID: \"f70b1dfe-4b12-40c8-8052-da91227479b0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.226428 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlfsc\" (UniqueName: \"kubernetes.io/projected/f70b1dfe-4b12-40c8-8052-da91227479b0-kube-api-access-wlfsc\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8\" (UID: \"f70b1dfe-4b12-40c8-8052-da91227479b0\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.304808 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.474918 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ctt57_bb401509-3ef4-41bc-93db-fbee2b5454b9/console/0.log"
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.475258 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ctt57"
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.541047 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"]
Nov 28 12:49:22 crc kubenswrapper[4779]: W1128 12:49:22.550970 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf70b1dfe_4b12_40c8_8052_da91227479b0.slice/crio-5e2711760676f1a566b787dfba3c01e240a596acdca1dc1d6303a9f30203fd8c WatchSource:0}: Error finding container 5e2711760676f1a566b787dfba3c01e240a596acdca1dc1d6303a9f30203fd8c: Status 404 returned error can't find the container with id 5e2711760676f1a566b787dfba3c01e240a596acdca1dc1d6303a9f30203fd8c
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.607817 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-oauth-config\") pod \"bb401509-3ef4-41bc-93db-fbee2b5454b9\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") "
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.607905 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-serving-cert\") pod \"bb401509-3ef4-41bc-93db-fbee2b5454b9\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") "
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.607962 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-oauth-serving-cert\") pod \"bb401509-3ef4-41bc-93db-fbee2b5454b9\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") "
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.607992 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-config\") pod \"bb401509-3ef4-41bc-93db-fbee2b5454b9\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") "
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.608014 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzhtk\" (UniqueName: \"kubernetes.io/projected/bb401509-3ef4-41bc-93db-fbee2b5454b9-kube-api-access-fzhtk\") pod \"bb401509-3ef4-41bc-93db-fbee2b5454b9\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") "
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.608039 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-trusted-ca-bundle\") pod \"bb401509-3ef4-41bc-93db-fbee2b5454b9\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") "
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.608069 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-service-ca\") pod \"bb401509-3ef4-41bc-93db-fbee2b5454b9\" (UID: \"bb401509-3ef4-41bc-93db-fbee2b5454b9\") "
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.609027 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-service-ca" (OuterVolumeSpecName: "service-ca") pod "bb401509-3ef4-41bc-93db-fbee2b5454b9" (UID: "bb401509-3ef4-41bc-93db-fbee2b5454b9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.609050 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bb401509-3ef4-41bc-93db-fbee2b5454b9" (UID: "bb401509-3ef4-41bc-93db-fbee2b5454b9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.609160 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-config" (OuterVolumeSpecName: "console-config") pod "bb401509-3ef4-41bc-93db-fbee2b5454b9" (UID: "bb401509-3ef4-41bc-93db-fbee2b5454b9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.609307 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bb401509-3ef4-41bc-93db-fbee2b5454b9" (UID: "bb401509-3ef4-41bc-93db-fbee2b5454b9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.615439 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bb401509-3ef4-41bc-93db-fbee2b5454b9" (UID: "bb401509-3ef4-41bc-93db-fbee2b5454b9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.615435 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb401509-3ef4-41bc-93db-fbee2b5454b9-kube-api-access-fzhtk" (OuterVolumeSpecName: "kube-api-access-fzhtk") pod "bb401509-3ef4-41bc-93db-fbee2b5454b9" (UID: "bb401509-3ef4-41bc-93db-fbee2b5454b9"). InnerVolumeSpecName "kube-api-access-fzhtk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.615822 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bb401509-3ef4-41bc-93db-fbee2b5454b9" (UID: "bb401509-3ef4-41bc-93db-fbee2b5454b9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.709281 4779 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.709313 4779 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.709324 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzhtk\" (UniqueName: \"kubernetes.io/projected/bb401509-3ef4-41bc-93db-fbee2b5454b9-kube-api-access-fzhtk\") on node \"crc\" DevicePath \"\""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.709336 4779 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.709345 4779 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb401509-3ef4-41bc-93db-fbee2b5454b9-service-ca\") on node \"crc\" DevicePath \"\""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.709355 4779 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:49:22 crc kubenswrapper[4779]: I1128 12:49:22.709363 4779 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb401509-3ef4-41bc-93db-fbee2b5454b9-console-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.166142 4779 generic.go:334] "Generic (PLEG): container finished" podID="f70b1dfe-4b12-40c8-8052-da91227479b0" containerID="9f4fee66de3fd6049019e4b3dd27bb882c5357a544a1eefc68d01139d6d9aebb" exitCode=0
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.166230 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8" event={"ID":"f70b1dfe-4b12-40c8-8052-da91227479b0","Type":"ContainerDied","Data":"9f4fee66de3fd6049019e4b3dd27bb882c5357a544a1eefc68d01139d6d9aebb"}
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.166294 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8" event={"ID":"f70b1dfe-4b12-40c8-8052-da91227479b0","Type":"ContainerStarted","Data":"5e2711760676f1a566b787dfba3c01e240a596acdca1dc1d6303a9f30203fd8c"}
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.169129 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ctt57_bb401509-3ef4-41bc-93db-fbee2b5454b9/console/0.log"
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.169187 4779 generic.go:334] "Generic (PLEG): container finished" podID="bb401509-3ef4-41bc-93db-fbee2b5454b9" containerID="b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6" exitCode=2
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.169237 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ctt57" event={"ID":"bb401509-3ef4-41bc-93db-fbee2b5454b9","Type":"ContainerDied","Data":"b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6"}
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.169290 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ctt57"
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.169311 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ctt57" event={"ID":"bb401509-3ef4-41bc-93db-fbee2b5454b9","Type":"ContainerDied","Data":"2b165f77d94c78b8df6d4386354b0dc458a222e970ceedaf6e4a0023e08c5d40"}
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.169676 4779 scope.go:117] "RemoveContainer" containerID="b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6"
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.202763 4779 scope.go:117] "RemoveContainer" containerID="b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6"
Nov 28 12:49:23 crc kubenswrapper[4779]: E1128 12:49:23.203494 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6\": container with ID starting with b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6 not found: ID does not exist" containerID="b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6"
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.203562 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6"} err="failed to get container status \"b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6\": rpc error: code = NotFound desc = could not find container \"b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6\": container with ID starting with b75ef25052c0835efd10ca75ba6aae87b86a54b1907919ef647027e084e854d6 not found: ID does not exist"
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.227911 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ctt57"]
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.237842 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-ctt57"]
Nov 28 12:49:23 crc kubenswrapper[4779]: I1128 12:49:23.740913 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb401509-3ef4-41bc-93db-fbee2b5454b9" path="/var/lib/kubelet/pods/bb401509-3ef4-41bc-93db-fbee2b5454b9/volumes"
Nov 28 12:49:25 crc kubenswrapper[4779]: I1128 12:49:25.190424 4779 generic.go:334] "Generic (PLEG): container finished" podID="f70b1dfe-4b12-40c8-8052-da91227479b0" containerID="bd1bbf16903cc781d1aeb7efd7b1c8ade7550bff635cf0b6003135f3bdad3a76" exitCode=0
Nov 28 12:49:25 crc kubenswrapper[4779]: I1128 12:49:25.190508 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8" event={"ID":"f70b1dfe-4b12-40c8-8052-da91227479b0","Type":"ContainerDied","Data":"bd1bbf16903cc781d1aeb7efd7b1c8ade7550bff635cf0b6003135f3bdad3a76"}
Nov 28 12:49:26 crc kubenswrapper[4779]: I1128 12:49:26.201251 4779 generic.go:334] "Generic (PLEG): container finished" podID="f70b1dfe-4b12-40c8-8052-da91227479b0" containerID="f6ed5ec6ac2b2b6eb85ab6d02881ac677efd6ddcc85b6ee065bfbdbdee24899b" exitCode=0
Nov 28 12:49:26 crc kubenswrapper[4779]: I1128 12:49:26.201306 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8" event={"ID":"f70b1dfe-4b12-40c8-8052-da91227479b0","Type":"ContainerDied","Data":"f6ed5ec6ac2b2b6eb85ab6d02881ac677efd6ddcc85b6ee065bfbdbdee24899b"}
Nov 28 12:49:27 crc kubenswrapper[4779]: I1128 12:49:27.520127 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:27 crc kubenswrapper[4779]: I1128 12:49:27.579307 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f70b1dfe-4b12-40c8-8052-da91227479b0-util\") pod \"f70b1dfe-4b12-40c8-8052-da91227479b0\" (UID: \"f70b1dfe-4b12-40c8-8052-da91227479b0\") "
Nov 28 12:49:27 crc kubenswrapper[4779]: I1128 12:49:27.579381 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f70b1dfe-4b12-40c8-8052-da91227479b0-bundle\") pod \"f70b1dfe-4b12-40c8-8052-da91227479b0\" (UID: \"f70b1dfe-4b12-40c8-8052-da91227479b0\") "
Nov 28 12:49:27 crc kubenswrapper[4779]: I1128 12:49:27.579624 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlfsc\" (UniqueName: \"kubernetes.io/projected/f70b1dfe-4b12-40c8-8052-da91227479b0-kube-api-access-wlfsc\") pod \"f70b1dfe-4b12-40c8-8052-da91227479b0\" (UID: \"f70b1dfe-4b12-40c8-8052-da91227479b0\") "
Nov 28 12:49:27 crc kubenswrapper[4779]: I1128 12:49:27.581344 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f70b1dfe-4b12-40c8-8052-da91227479b0-bundle" (OuterVolumeSpecName: "bundle") pod "f70b1dfe-4b12-40c8-8052-da91227479b0" (UID: "f70b1dfe-4b12-40c8-8052-da91227479b0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:49:27 crc kubenswrapper[4779]: I1128 12:49:27.587944 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f70b1dfe-4b12-40c8-8052-da91227479b0-kube-api-access-wlfsc" (OuterVolumeSpecName: "kube-api-access-wlfsc") pod "f70b1dfe-4b12-40c8-8052-da91227479b0" (UID: "f70b1dfe-4b12-40c8-8052-da91227479b0"). InnerVolumeSpecName "kube-api-access-wlfsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:49:27 crc kubenswrapper[4779]: I1128 12:49:27.611034 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f70b1dfe-4b12-40c8-8052-da91227479b0-util" (OuterVolumeSpecName: "util") pod "f70b1dfe-4b12-40c8-8052-da91227479b0" (UID: "f70b1dfe-4b12-40c8-8052-da91227479b0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:49:27 crc kubenswrapper[4779]: I1128 12:49:27.680782 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlfsc\" (UniqueName: \"kubernetes.io/projected/f70b1dfe-4b12-40c8-8052-da91227479b0-kube-api-access-wlfsc\") on node \"crc\" DevicePath \"\""
Nov 28 12:49:27 crc kubenswrapper[4779]: I1128 12:49:27.681061 4779 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f70b1dfe-4b12-40c8-8052-da91227479b0-util\") on node \"crc\" DevicePath \"\""
Nov 28 12:49:27 crc kubenswrapper[4779]: I1128 12:49:27.681219 4779 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f70b1dfe-4b12-40c8-8052-da91227479b0-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 12:49:28 crc kubenswrapper[4779]: I1128 12:49:28.219314 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8" event={"ID":"f70b1dfe-4b12-40c8-8052-da91227479b0","Type":"ContainerDied","Data":"5e2711760676f1a566b787dfba3c01e240a596acdca1dc1d6303a9f30203fd8c"}
Nov 28 12:49:28 crc kubenswrapper[4779]: I1128 12:49:28.219623 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e2711760676f1a566b787dfba3c01e240a596acdca1dc1d6303a9f30203fd8c"
Nov 28 12:49:28 crc kubenswrapper[4779]: I1128 12:49:28.219414 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.198328 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"]
Nov 28 12:49:40 crc kubenswrapper[4779]: E1128 12:49:40.199053 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f70b1dfe-4b12-40c8-8052-da91227479b0" containerName="extract"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.199068 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70b1dfe-4b12-40c8-8052-da91227479b0" containerName="extract"
Nov 28 12:49:40 crc kubenswrapper[4779]: E1128 12:49:40.199077 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f70b1dfe-4b12-40c8-8052-da91227479b0" containerName="util"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.199085 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70b1dfe-4b12-40c8-8052-da91227479b0" containerName="util"
Nov 28 12:49:40 crc kubenswrapper[4779]: E1128 12:49:40.199113 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb401509-3ef4-41bc-93db-fbee2b5454b9" containerName="console"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.199123 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb401509-3ef4-41bc-93db-fbee2b5454b9" containerName="console"
Nov 28 12:49:40 crc kubenswrapper[4779]: E1128 12:49:40.199134 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f70b1dfe-4b12-40c8-8052-da91227479b0" containerName="pull"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.199141 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70b1dfe-4b12-40c8-8052-da91227479b0" containerName="pull"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.199238 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f70b1dfe-4b12-40c8-8052-da91227479b0" containerName="extract"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.199246 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb401509-3ef4-41bc-93db-fbee2b5454b9" containerName="console"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.199619 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.204140 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.204326 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.205591 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.209941 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-mtpsg"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.213658 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.253848 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvxxh\" (UniqueName: \"kubernetes.io/projected/93890301-ca3f-4009-a55d-960edac754a9-kube-api-access-qvxxh\") pod \"metallb-operator-controller-manager-7d5c964c78-9tlcl\" (UID: \"93890301-ca3f-4009-a55d-960edac754a9\") " pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.253901 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93890301-ca3f-4009-a55d-960edac754a9-apiservice-cert\") pod \"metallb-operator-controller-manager-7d5c964c78-9tlcl\" (UID: \"93890301-ca3f-4009-a55d-960edac754a9\") " pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.253964 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93890301-ca3f-4009-a55d-960edac754a9-webhook-cert\") pod \"metallb-operator-controller-manager-7d5c964c78-9tlcl\" (UID: \"93890301-ca3f-4009-a55d-960edac754a9\") " pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.273175 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"]
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.355385 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93890301-ca3f-4009-a55d-960edac754a9-apiservice-cert\") pod \"metallb-operator-controller-manager-7d5c964c78-9tlcl\" (UID: \"93890301-ca3f-4009-a55d-960edac754a9\") " pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.355506 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93890301-ca3f-4009-a55d-960edac754a9-webhook-cert\") pod \"metallb-operator-controller-manager-7d5c964c78-9tlcl\" (UID: \"93890301-ca3f-4009-a55d-960edac754a9\") " pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.355552 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvxxh\" (UniqueName: \"kubernetes.io/projected/93890301-ca3f-4009-a55d-960edac754a9-kube-api-access-qvxxh\") pod \"metallb-operator-controller-manager-7d5c964c78-9tlcl\" (UID: \"93890301-ca3f-4009-a55d-960edac754a9\") " pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.363855 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93890301-ca3f-4009-a55d-960edac754a9-apiservice-cert\") pod \"metallb-operator-controller-manager-7d5c964c78-9tlcl\" (UID: \"93890301-ca3f-4009-a55d-960edac754a9\") " pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.374653 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93890301-ca3f-4009-a55d-960edac754a9-webhook-cert\") pod \"metallb-operator-controller-manager-7d5c964c78-9tlcl\" (UID: \"93890301-ca3f-4009-a55d-960edac754a9\") " pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.400891 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvxxh\" (UniqueName: \"kubernetes.io/projected/93890301-ca3f-4009-a55d-960edac754a9-kube-api-access-qvxxh\") pod \"metallb-operator-controller-manager-7d5c964c78-9tlcl\" (UID: \"93890301-ca3f-4009-a55d-960edac754a9\") " pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.474342 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"]
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.474975 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.479461 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.479486 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8b7rj"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.480326 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.488632 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"]
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.513072 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.557856 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60e79db0-fa26-46e7-80d8-55720f1372a2-webhook-cert\") pod \"metallb-operator-webhook-server-7c8544dcdc-ggmwl\" (UID: \"60e79db0-fa26-46e7-80d8-55720f1372a2\") " pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.558207 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hmwp\" (UniqueName: \"kubernetes.io/projected/60e79db0-fa26-46e7-80d8-55720f1372a2-kube-api-access-6hmwp\") pod \"metallb-operator-webhook-server-7c8544dcdc-ggmwl\" (UID: \"60e79db0-fa26-46e7-80d8-55720f1372a2\") " pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.558280 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60e79db0-fa26-46e7-80d8-55720f1372a2-apiservice-cert\") pod \"metallb-operator-webhook-server-7c8544dcdc-ggmwl\" (UID: \"60e79db0-fa26-46e7-80d8-55720f1372a2\") " pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.659376 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60e79db0-fa26-46e7-80d8-55720f1372a2-apiservice-cert\") pod \"metallb-operator-webhook-server-7c8544dcdc-ggmwl\" (UID: \"60e79db0-fa26-46e7-80d8-55720f1372a2\") " pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.659443 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60e79db0-fa26-46e7-80d8-55720f1372a2-webhook-cert\") pod \"metallb-operator-webhook-server-7c8544dcdc-ggmwl\" (UID: \"60e79db0-fa26-46e7-80d8-55720f1372a2\") " pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.659474 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hmwp\" (UniqueName: \"kubernetes.io/projected/60e79db0-fa26-46e7-80d8-55720f1372a2-kube-api-access-6hmwp\") pod \"metallb-operator-webhook-server-7c8544dcdc-ggmwl\" (UID: \"60e79db0-fa26-46e7-80d8-55720f1372a2\") " pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.663983 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60e79db0-fa26-46e7-80d8-55720f1372a2-apiservice-cert\") pod \"metallb-operator-webhook-server-7c8544dcdc-ggmwl\" (UID: \"60e79db0-fa26-46e7-80d8-55720f1372a2\") " pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.665736 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60e79db0-fa26-46e7-80d8-55720f1372a2-webhook-cert\") pod \"metallb-operator-webhook-server-7c8544dcdc-ggmwl\" (UID: \"60e79db0-fa26-46e7-80d8-55720f1372a2\") " pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.678869 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hmwp\" (UniqueName: \"kubernetes.io/projected/60e79db0-fa26-46e7-80d8-55720f1372a2-kube-api-access-6hmwp\") pod \"metallb-operator-webhook-server-7c8544dcdc-ggmwl\" (UID: \"60e79db0-fa26-46e7-80d8-55720f1372a2\") " pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.747018 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"]
Nov 28 12:49:40 crc kubenswrapper[4779]: W1128 12:49:40.753931 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93890301_ca3f_4009_a55d_960edac754a9.slice/crio-13d7b60cd3362925ebfefaf42e54795aa029f6c056b00129a3f51d46d26bd17c WatchSource:0}: Error finding container 13d7b60cd3362925ebfefaf42e54795aa029f6c056b00129a3f51d46d26bd17c: Status 404 returned error can't find the container with id 13d7b60cd3362925ebfefaf42e54795aa029f6c056b00129a3f51d46d26bd17c
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.787676 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:49:40 crc kubenswrapper[4779]: I1128 12:49:40.986390 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"]
Nov 28 12:49:40 crc kubenswrapper[4779]: W1128 12:49:40.993295 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60e79db0_fa26_46e7_80d8_55720f1372a2.slice/crio-9d8c93c29aaa75ae17565e461264246b4443650df0abe401906faa918909b338 WatchSource:0}: Error finding container 9d8c93c29aaa75ae17565e461264246b4443650df0abe401906faa918909b338: Status 404 returned error can't find the container with id 9d8c93c29aaa75ae17565e461264246b4443650df0abe401906faa918909b338
Nov 28 12:49:41 crc kubenswrapper[4779]: I1128 12:49:41.313225 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl" event={"ID":"60e79db0-fa26-46e7-80d8-55720f1372a2","Type":"ContainerStarted","Data":"9d8c93c29aaa75ae17565e461264246b4443650df0abe401906faa918909b338"}
Nov 28 12:49:41 crc kubenswrapper[4779]: I1128 12:49:41.314030 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl" event={"ID":"93890301-ca3f-4009-a55d-960edac754a9","Type":"ContainerStarted","Data":"13d7b60cd3362925ebfefaf42e54795aa029f6c056b00129a3f51d46d26bd17c"}
Nov 28 12:49:46 crc kubenswrapper[4779]: I1128 12:49:46.345016 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl" event={"ID":"93890301-ca3f-4009-a55d-960edac754a9","Type":"ContainerStarted","Data":"83ae22ce3dd72d3c324885eb93112f5f508b3857c24bfef27fe7db8f8b9cb397"}
Nov 28 12:49:46 crc kubenswrapper[4779]: I1128 12:49:46.345689 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:49:46 crc kubenswrapper[4779]: I1128 12:49:46.347896 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl" event={"ID":"60e79db0-fa26-46e7-80d8-55720f1372a2","Type":"ContainerStarted","Data":"3448b0b559a0b39ac14c936f05abc59ea32ffdafd6ddbf41a3eb2a049c4731aa"}
Nov 28 12:49:46 crc kubenswrapper[4779]: I1128 12:49:46.348079 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:49:46 crc kubenswrapper[4779]: I1128 12:49:46.384343 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl" podStartSLOduration=1.4529170310000001 podStartE2EDuration="6.384319404s" podCreationTimestamp="2025-11-28 12:49:40 +0000 UTC" firstStartedPulling="2025-11-28 12:49:40.756402592 +0000 UTC m=+841.322077946" lastFinishedPulling="2025-11-28 12:49:45.687804955 +0000 UTC m=+846.253480319" observedRunningTime="2025-11-28 12:49:46.374450299 +0000 UTC m=+846.940125703" watchObservedRunningTime="2025-11-28 12:49:46.384319404 +0000 UTC m=+846.949994788"
Nov 28 12:50:00 crc kubenswrapper[4779]: I1128 12:50:00.795015 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl"
Nov 28 12:50:00 crc kubenswrapper[4779]: I1128 12:50:00.823969 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7c8544dcdc-ggmwl" podStartSLOduration=16.101156752 podStartE2EDuration="20.823950505s" podCreationTimestamp="2025-11-28 12:49:40 +0000 UTC" firstStartedPulling="2025-11-28 12:49:40.995581592 +0000 UTC m=+841.561256946" lastFinishedPulling="2025-11-28 12:49:45.718375325 +0000 UTC m=+846.284050699" observedRunningTime="2025-11-28 12:49:46.403764536 +0000 UTC m=+846.969439960" watchObservedRunningTime="2025-11-28 12:50:00.823950505 +0000 UTC m=+861.389625869"
Nov 28 12:50:20 crc kubenswrapper[4779]: I1128 12:50:20.516693 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7d5c964c78-9tlcl"
Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.380840 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68"]
Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.382124 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68"
Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.384188 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.384361 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-jbp6s"
Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.385865 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-w5vz4"]
Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.397307 4779 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.400448 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.400922 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.411991 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68"] Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.465773 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-flq64"] Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.466774 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-flq64" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.468900 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.469170 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.469194 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4g2z8" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.469626 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.487263 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-89xvz"] Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.488313 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.493005 4779 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.508240 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-89xvz"] Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.528811 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e5db87da-4229-4c2f-abbd-bb5aff35de97-metrics\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.528857 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e5db87da-4229-4c2f-abbd-bb5aff35de97-frr-startup\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.528886 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea534549-07a6-43e1-98e7-906ee50e4146-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-nxg68\" (UID: \"ea534549-07a6-43e1-98e7-906ee50e4146\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.528902 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5db87da-4229-4c2f-abbd-bb5aff35de97-metrics-certs\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.529023 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4ph7\" (UniqueName: \"kubernetes.io/projected/e5db87da-4229-4c2f-abbd-bb5aff35de97-kube-api-access-d4ph7\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.529153 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e5db87da-4229-4c2f-abbd-bb5aff35de97-reloader\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.529190 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e5db87da-4229-4c2f-abbd-bb5aff35de97-frr-conf\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.529308 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxhzb\" (UniqueName: \"kubernetes.io/projected/ea534549-07a6-43e1-98e7-906ee50e4146-kube-api-access-lxhzb\") pod \"frr-k8s-webhook-server-7fcb986d4-nxg68\" (UID: \"ea534549-07a6-43e1-98e7-906ee50e4146\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68" Nov 28 12:50:21 crc 
kubenswrapper[4779]: I1128 12:50:21.529358 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e5db87da-4229-4c2f-abbd-bb5aff35de97-frr-sockets\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630444 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea534549-07a6-43e1-98e7-906ee50e4146-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-nxg68\" (UID: \"ea534549-07a6-43e1-98e7-906ee50e4146\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630493 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5db87da-4229-4c2f-abbd-bb5aff35de97-metrics-certs\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630512 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4ph7\" (UniqueName: \"kubernetes.io/projected/e5db87da-4229-4c2f-abbd-bb5aff35de97-kube-api-access-d4ph7\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630532 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9vrq\" (UniqueName: \"kubernetes.io/projected/7fe4463e-8739-494e-8171-7bfc925826a9-kube-api-access-w9vrq\") pod \"controller-f8648f98b-89xvz\" (UID: \"7fe4463e-8739-494e-8171-7bfc925826a9\") " pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630560 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e5db87da-4229-4c2f-abbd-bb5aff35de97-reloader\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630575 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7fe4463e-8739-494e-8171-7bfc925826a9-cert\") pod \"controller-f8648f98b-89xvz\" (UID: \"7fe4463e-8739-494e-8171-7bfc925826a9\") " pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630590 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e5db87da-4229-4c2f-abbd-bb5aff35de97-frr-conf\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630612 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7bd19fff-499e-443a-b571-8af43ae08b4e-memberlist\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630633 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/7bd19fff-499e-443a-b571-8af43ae08b4e-metrics-certs\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630655 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7bd19fff-499e-443a-b571-8af43ae08b4e-metallb-excludel2\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630673 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxhzb\" (UniqueName: \"kubernetes.io/projected/ea534549-07a6-43e1-98e7-906ee50e4146-kube-api-access-lxhzb\") pod \"frr-k8s-webhook-server-7fcb986d4-nxg68\" (UID: \"ea534549-07a6-43e1-98e7-906ee50e4146\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630691 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e5db87da-4229-4c2f-abbd-bb5aff35de97-frr-sockets\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630715 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fe4463e-8739-494e-8171-7bfc925826a9-metrics-certs\") pod \"controller-f8648f98b-89xvz\" (UID: \"7fe4463e-8739-494e-8171-7bfc925826a9\") " pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:21 crc kubenswrapper[4779]: E1128 12:50:21.630715 4779 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 28 12:50:21 crc kubenswrapper[4779]: E1128 12:50:21.630840 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5db87da-4229-4c2f-abbd-bb5aff35de97-metrics-certs podName:e5db87da-4229-4c2f-abbd-bb5aff35de97 nodeName:}" failed. No retries permitted until 2025-11-28 12:50:22.1308127 +0000 UTC m=+882.696488084 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e5db87da-4229-4c2f-abbd-bb5aff35de97-metrics-certs") pod "frr-k8s-w5vz4" (UID: "e5db87da-4229-4c2f-abbd-bb5aff35de97") : secret "frr-k8s-certs-secret" not found Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.630736 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e5db87da-4229-4c2f-abbd-bb5aff35de97-metrics\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.631351 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lpn9\" (UniqueName: \"kubernetes.io/projected/7bd19fff-499e-443a-b571-8af43ae08b4e-kube-api-access-5lpn9\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.631414 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e5db87da-4229-4c2f-abbd-bb5aff35de97-frr-startup\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.631429 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e5db87da-4229-4c2f-abbd-bb5aff35de97-metrics\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.631712 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e5db87da-4229-4c2f-abbd-bb5aff35de97-reloader\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.632021 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e5db87da-4229-4c2f-abbd-bb5aff35de97-frr-sockets\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.632168 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e5db87da-4229-4c2f-abbd-bb5aff35de97-frr-conf\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.632490 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e5db87da-4229-4c2f-abbd-bb5aff35de97-frr-startup\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.641739 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea534549-07a6-43e1-98e7-906ee50e4146-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-nxg68\" (UID: \"ea534549-07a6-43e1-98e7-906ee50e4146\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.649870 4779 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4ph7\" (UniqueName: \"kubernetes.io/projected/e5db87da-4229-4c2f-abbd-bb5aff35de97-kube-api-access-d4ph7\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.650140 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxhzb\" (UniqueName: \"kubernetes.io/projected/ea534549-07a6-43e1-98e7-906ee50e4146-kube-api-access-lxhzb\") pod \"frr-k8s-webhook-server-7fcb986d4-nxg68\" (UID: \"ea534549-07a6-43e1-98e7-906ee50e4146\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.713059 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.732208 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lpn9\" (UniqueName: \"kubernetes.io/projected/7bd19fff-499e-443a-b571-8af43ae08b4e-kube-api-access-5lpn9\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.732328 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9vrq\" (UniqueName: \"kubernetes.io/projected/7fe4463e-8739-494e-8171-7bfc925826a9-kube-api-access-w9vrq\") pod \"controller-f8648f98b-89xvz\" (UID: \"7fe4463e-8739-494e-8171-7bfc925826a9\") " pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.732387 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7fe4463e-8739-494e-8171-7bfc925826a9-cert\") pod \"controller-f8648f98b-89xvz\" (UID: \"7fe4463e-8739-494e-8171-7bfc925826a9\") " pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.732923 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7bd19fff-499e-443a-b571-8af43ae08b4e-memberlist\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.732977 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bd19fff-499e-443a-b571-8af43ae08b4e-metrics-certs\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.733023 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7bd19fff-499e-443a-b571-8af43ae08b4e-metallb-excludel2\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.733160 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fe4463e-8739-494e-8171-7bfc925826a9-metrics-certs\") pod \"controller-f8648f98b-89xvz\" (UID: \"7fe4463e-8739-494e-8171-7bfc925826a9\") " pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:21 
crc kubenswrapper[4779]: E1128 12:50:21.733167 4779 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.734183 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7bd19fff-499e-443a-b571-8af43ae08b4e-metallb-excludel2\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:21 crc kubenswrapper[4779]: E1128 12:50:21.734278 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bd19fff-499e-443a-b571-8af43ae08b4e-memberlist podName:7bd19fff-499e-443a-b571-8af43ae08b4e nodeName:}" failed. No retries permitted until 2025-11-28 12:50:22.234242385 +0000 UTC m=+882.799917789 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7bd19fff-499e-443a-b571-8af43ae08b4e-memberlist") pod "speaker-flq64" (UID: "7bd19fff-499e-443a-b571-8af43ae08b4e") : secret "metallb-memberlist" not found Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.736484 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bd19fff-499e-443a-b571-8af43ae08b4e-metrics-certs\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.738797 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7fe4463e-8739-494e-8171-7bfc925826a9-cert\") pod \"controller-f8648f98b-89xvz\" (UID: \"7fe4463e-8739-494e-8171-7bfc925826a9\") " pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.740813 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7fe4463e-8739-494e-8171-7bfc925826a9-metrics-certs\") pod \"controller-f8648f98b-89xvz\" (UID: \"7fe4463e-8739-494e-8171-7bfc925826a9\") " pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.752878 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lpn9\" (UniqueName: \"kubernetes.io/projected/7bd19fff-499e-443a-b571-8af43ae08b4e-kube-api-access-5lpn9\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.757078 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9vrq\" (UniqueName: \"kubernetes.io/projected/7fe4463e-8739-494e-8171-7bfc925826a9-kube-api-access-w9vrq\") pod \"controller-f8648f98b-89xvz\" (UID: \"7fe4463e-8739-494e-8171-7bfc925826a9\") " pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:21 crc kubenswrapper[4779]: I1128 12:50:21.805887 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.105834 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68"] Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.139128 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5db87da-4229-4c2f-abbd-bb5aff35de97-metrics-certs\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.142537 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5db87da-4229-4c2f-abbd-bb5aff35de97-metrics-certs\") pod \"frr-k8s-w5vz4\" (UID: \"e5db87da-4229-4c2f-abbd-bb5aff35de97\") " pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.204820 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-89xvz"] Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.240084 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7bd19fff-499e-443a-b571-8af43ae08b4e-memberlist\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:22 crc kubenswrapper[4779]: E1128 12:50:22.240361 4779 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 12:50:22 crc kubenswrapper[4779]: E1128 12:50:22.240428 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bd19fff-499e-443a-b571-8af43ae08b4e-memberlist podName:7bd19fff-499e-443a-b571-8af43ae08b4e nodeName:}" failed. No retries permitted until 2025-11-28 12:50:23.240410536 +0000 UTC m=+883.806085900 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7bd19fff-499e-443a-b571-8af43ae08b4e-memberlist") pod "speaker-flq64" (UID: "7bd19fff-499e-443a-b571-8af43ae08b4e") : secret "metallb-memberlist" not found Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.341302 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.608714 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w5vz4" event={"ID":"e5db87da-4229-4c2f-abbd-bb5aff35de97","Type":"ContainerStarted","Data":"900173bafcf9147cf22f51aea131e688919ad8994bae4a1066c3b07204497d83"} Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.610129 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68" event={"ID":"ea534549-07a6-43e1-98e7-906ee50e4146","Type":"ContainerStarted","Data":"463e3cf8426da4fd884cff540cab22778979a9eab3d5a2b902ce118badea2b78"} Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.611727 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-89xvz" event={"ID":"7fe4463e-8739-494e-8171-7bfc925826a9","Type":"ContainerStarted","Data":"6379346355b68120a06dfe9d2872119b48f23a49d3bb9e8085b8b170a10d53a0"} Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.611759 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-89xvz" event={"ID":"7fe4463e-8739-494e-8171-7bfc925826a9","Type":"ContainerStarted","Data":"7911c43be2b13cc2945f43dc2331bb2a3baa83476e008f8e0995fae1c972e5fc"} Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.611772 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-89xvz" event={"ID":"7fe4463e-8739-494e-8171-7bfc925826a9","Type":"ContainerStarted","Data":"4e1ec1191cbcc9949455db77457fd66d89c4a9536e2e20d22798b57b19f6a241"} Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.611966 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:22 crc kubenswrapper[4779]: I1128 12:50:22.629009 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-89xvz" podStartSLOduration=1.628988263 podStartE2EDuration="1.628988263s" podCreationTimestamp="2025-11-28 12:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:50:22.623556697 +0000 UTC m=+883.189232051" watchObservedRunningTime="2025-11-28 12:50:22.628988263 +0000 UTC m=+883.194663617" Nov 28 12:50:23 crc kubenswrapper[4779]: I1128 12:50:23.253270 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7bd19fff-499e-443a-b571-8af43ae08b4e-memberlist\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:23 crc kubenswrapper[4779]: I1128 12:50:23.261605 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7bd19fff-499e-443a-b571-8af43ae08b4e-memberlist\") pod \"speaker-flq64\" (UID: \"7bd19fff-499e-443a-b571-8af43ae08b4e\") " pod="metallb-system/speaker-flq64" Nov 28 12:50:23 crc kubenswrapper[4779]: I1128 12:50:23.288532 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-flq64" Nov 28 12:50:23 crc kubenswrapper[4779]: W1128 12:50:23.316008 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bd19fff_499e_443a_b571_8af43ae08b4e.slice/crio-26eeace9253c3b6c04252ea10f638df7f7e69ac1b62c27793248e835ed381713 WatchSource:0}: Error finding container 26eeace9253c3b6c04252ea10f638df7f7e69ac1b62c27793248e835ed381713: Status 404 returned error can't find the container with id 26eeace9253c3b6c04252ea10f638df7f7e69ac1b62c27793248e835ed381713 Nov 28 12:50:23 crc kubenswrapper[4779]: I1128 12:50:23.646317 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-flq64" event={"ID":"7bd19fff-499e-443a-b571-8af43ae08b4e","Type":"ContainerStarted","Data":"26eeace9253c3b6c04252ea10f638df7f7e69ac1b62c27793248e835ed381713"} Nov 28 12:50:24 crc kubenswrapper[4779]: I1128 12:50:24.672788 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-flq64" event={"ID":"7bd19fff-499e-443a-b571-8af43ae08b4e","Type":"ContainerStarted","Data":"285b30909278d54ed1057470579c3c0e84ef57d97813879b77a5b1b7976e50a8"} Nov 28 12:50:24 crc kubenswrapper[4779]: I1128 12:50:24.673086 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-flq64" event={"ID":"7bd19fff-499e-443a-b571-8af43ae08b4e","Type":"ContainerStarted","Data":"aeaf0fa45f64786a2c69953df45bd83723a61fdd0a994678de4ac53a1b889004"} Nov 28 12:50:24 crc kubenswrapper[4779]: I1128 12:50:24.673156 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-flq64" Nov 28 12:50:24 crc kubenswrapper[4779]: I1128 12:50:24.694276 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-flq64" podStartSLOduration=3.694258037 podStartE2EDuration="3.694258037s" podCreationTimestamp="2025-11-28 12:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:50:24.689681604 +0000 UTC m=+885.255356968" watchObservedRunningTime="2025-11-28 12:50:24.694258037 +0000 UTC m=+885.259933411" Nov 28 12:50:29 crc kubenswrapper[4779]: I1128 12:50:29.704558 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68" event={"ID":"ea534549-07a6-43e1-98e7-906ee50e4146","Type":"ContainerStarted","Data":"0945a32cf2d278a9c118e87d5d0f8916e37d0d7a68238bfe441305c669b82d05"} Nov 28 12:50:29 crc kubenswrapper[4779]: I1128 12:50:29.705674 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68" Nov 28 12:50:29 crc kubenswrapper[4779]: I1128 12:50:29.708325 4779 generic.go:334] "Generic (PLEG): container finished" podID="e5db87da-4229-4c2f-abbd-bb5aff35de97" containerID="ecbe47c9a3df2614b60c75d93498b000f5f3ce6739d044b80b4a4ad264a9849a" exitCode=0 Nov 28 12:50:29 crc kubenswrapper[4779]: I1128 12:50:29.708381 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w5vz4" event={"ID":"e5db87da-4229-4c2f-abbd-bb5aff35de97","Type":"ContainerDied","Data":"ecbe47c9a3df2614b60c75d93498b000f5f3ce6739d044b80b4a4ad264a9849a"} Nov 28 12:50:29 crc kubenswrapper[4779]: I1128 12:50:29.774653 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68" podStartSLOduration=1.7153280789999998 
podStartE2EDuration="8.774626683s" podCreationTimestamp="2025-11-28 12:50:21 +0000 UTC" firstStartedPulling="2025-11-28 12:50:22.116035979 +0000 UTC m=+882.681711373" lastFinishedPulling="2025-11-28 12:50:29.175334623 +0000 UTC m=+889.741009977" observedRunningTime="2025-11-28 12:50:29.734779144 +0000 UTC m=+890.300454558" watchObservedRunningTime="2025-11-28 12:50:29.774626683 +0000 UTC m=+890.340302077" Nov 28 12:50:30 crc kubenswrapper[4779]: I1128 12:50:30.721230 4779 generic.go:334] "Generic (PLEG): container finished" podID="e5db87da-4229-4c2f-abbd-bb5aff35de97" containerID="becc6b24ac368369cde89d909897d5d33c774bf828f12747f5776545333086d5" exitCode=0 Nov 28 12:50:30 crc kubenswrapper[4779]: I1128 12:50:30.721309 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w5vz4" event={"ID":"e5db87da-4229-4c2f-abbd-bb5aff35de97","Type":"ContainerDied","Data":"becc6b24ac368369cde89d909897d5d33c774bf828f12747f5776545333086d5"} Nov 28 12:50:31 crc kubenswrapper[4779]: I1128 12:50:31.728244 4779 generic.go:334] "Generic (PLEG): container finished" podID="e5db87da-4229-4c2f-abbd-bb5aff35de97" containerID="97a2edc3fa3eff2f2d64382900c8c5ec17df607161216bd7cb62af4b11f8aff5" exitCode=0 Nov 28 12:50:31 crc kubenswrapper[4779]: I1128 12:50:31.733316 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w5vz4" event={"ID":"e5db87da-4229-4c2f-abbd-bb5aff35de97","Type":"ContainerDied","Data":"97a2edc3fa3eff2f2d64382900c8c5ec17df607161216bd7cb62af4b11f8aff5"} Nov 28 12:50:32 crc kubenswrapper[4779]: I1128 12:50:32.744300 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w5vz4" event={"ID":"e5db87da-4229-4c2f-abbd-bb5aff35de97","Type":"ContainerStarted","Data":"f1337dc515ec709782f69b8e1fcd8332c3348a233b66387d25308957a638c148"} Nov 28 12:50:32 crc kubenswrapper[4779]: I1128 12:50:32.744691 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w5vz4" event={"ID":"e5db87da-4229-4c2f-abbd-bb5aff35de97","Type":"ContainerStarted","Data":"8c3f1ca5d002958e21d13cab2d9f3a96e445abe7b3179df7d16edc549d86c6de"} Nov 28 12:50:32 crc kubenswrapper[4779]: I1128 12:50:32.744715 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w5vz4" event={"ID":"e5db87da-4229-4c2f-abbd-bb5aff35de97","Type":"ContainerStarted","Data":"4f78f9af5a2090696dc9fdf51bb93a02d28554e80cb5fb89b649f303bb2567a0"} Nov 28 12:50:32 crc kubenswrapper[4779]: I1128 12:50:32.744732 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w5vz4" event={"ID":"e5db87da-4229-4c2f-abbd-bb5aff35de97","Type":"ContainerStarted","Data":"5795b5b8b3f557563179045f6d3f7cf3978cb344472ba46c7e73981c19dfa011"} Nov 28 12:50:32 crc kubenswrapper[4779]: I1128 12:50:32.744745 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w5vz4" event={"ID":"e5db87da-4229-4c2f-abbd-bb5aff35de97","Type":"ContainerStarted","Data":"aed36b65620c8dfe569374f0e568e77fb53ce7e327c76b72e9c3b65d60baaf88"} Nov 28 12:50:33 crc kubenswrapper[4779]: I1128 12:50:33.295365 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-flq64" Nov 28 12:50:33 crc kubenswrapper[4779]: I1128 12:50:33.758246 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w5vz4" event={"ID":"e5db87da-4229-4c2f-abbd-bb5aff35de97","Type":"ContainerStarted","Data":"161fc391d3c6e0124143249ce38beb21564086076af689466faa7bd8ad9568a0"} Nov 28 12:50:33 crc 
kubenswrapper[4779]: I1128 12:50:33.758490 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:33 crc kubenswrapper[4779]: I1128 12:50:33.804020 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-w5vz4" podStartSLOduration=6.095374423 podStartE2EDuration="12.803995117s" podCreationTimestamp="2025-11-28 12:50:21 +0000 UTC" firstStartedPulling="2025-11-28 12:50:22.448656144 +0000 UTC m=+883.014331498" lastFinishedPulling="2025-11-28 12:50:29.157276838 +0000 UTC m=+889.722952192" observedRunningTime="2025-11-28 12:50:33.798912261 +0000 UTC m=+894.364587625" watchObservedRunningTime="2025-11-28 12:50:33.803995117 +0000 UTC m=+894.369670501" Nov 28 12:50:36 crc kubenswrapper[4779]: I1128 12:50:36.410760 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9gzt5"] Nov 28 12:50:36 crc kubenswrapper[4779]: I1128 12:50:36.411953 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9gzt5" Nov 28 12:50:36 crc kubenswrapper[4779]: I1128 12:50:36.416289 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 28 12:50:36 crc kubenswrapper[4779]: I1128 12:50:36.418569 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hsslk" Nov 28 12:50:36 crc kubenswrapper[4779]: I1128 12:50:36.418841 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 28 12:50:36 crc kubenswrapper[4779]: I1128 12:50:36.426573 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9gzt5"] Nov 28 12:50:36 crc kubenswrapper[4779]: I1128 12:50:36.549721 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qph2\" (UniqueName: \"kubernetes.io/projected/bf5180d6-52b7-4218-825d-3117ef21c04b-kube-api-access-2qph2\") pod \"openstack-operator-index-9gzt5\" (UID: \"bf5180d6-52b7-4218-825d-3117ef21c04b\") " pod="openstack-operators/openstack-operator-index-9gzt5" Nov 28 12:50:36 crc kubenswrapper[4779]: I1128 12:50:36.650569 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qph2\" (UniqueName: \"kubernetes.io/projected/bf5180d6-52b7-4218-825d-3117ef21c04b-kube-api-access-2qph2\") pod \"openstack-operator-index-9gzt5\" (UID: \"bf5180d6-52b7-4218-825d-3117ef21c04b\") " pod="openstack-operators/openstack-operator-index-9gzt5" Nov 28 12:50:36 crc kubenswrapper[4779]: I1128 12:50:36.678822 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qph2\" (UniqueName: \"kubernetes.io/projected/bf5180d6-52b7-4218-825d-3117ef21c04b-kube-api-access-2qph2\") pod \"openstack-operator-index-9gzt5\" (UID: \"bf5180d6-52b7-4218-825d-3117ef21c04b\") " pod="openstack-operators/openstack-operator-index-9gzt5" Nov 28 12:50:36 crc kubenswrapper[4779]: I1128 12:50:36.740354 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9gzt5" Nov 28 12:50:36 crc kubenswrapper[4779]: I1128 12:50:36.992887 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9gzt5"] Nov 28 12:50:37 crc kubenswrapper[4779]: I1128 12:50:37.341996 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:37 crc kubenswrapper[4779]: I1128 12:50:37.418066 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:37 crc kubenswrapper[4779]: I1128 12:50:37.796188 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9gzt5" event={"ID":"bf5180d6-52b7-4218-825d-3117ef21c04b","Type":"ContainerStarted","Data":"f80ffcb8d837f64601aae651b1d44033079c09d2443198fd02030ff39d98f702"} Nov 28 12:50:39 crc kubenswrapper[4779]: I1128 12:50:39.776205 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9gzt5"] Nov 28 12:50:40 crc kubenswrapper[4779]: I1128 12:50:40.388837 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qvbp8"] Nov 28 12:50:40 crc kubenswrapper[4779]: I1128 12:50:40.389493 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qvbp8"] Nov 28 12:50:40 crc kubenswrapper[4779]: I1128 12:50:40.389590 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qvbp8" Nov 28 12:50:40 crc kubenswrapper[4779]: I1128 12:50:40.537656 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gkpn\" (UniqueName: \"kubernetes.io/projected/527c77d8-6692-434a-88b6-4d5e3dc93337-kube-api-access-9gkpn\") pod \"openstack-operator-index-qvbp8\" (UID: \"527c77d8-6692-434a-88b6-4d5e3dc93337\") " pod="openstack-operators/openstack-operator-index-qvbp8" Nov 28 12:50:40 crc kubenswrapper[4779]: I1128 12:50:40.639524 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gkpn\" (UniqueName: \"kubernetes.io/projected/527c77d8-6692-434a-88b6-4d5e3dc93337-kube-api-access-9gkpn\") pod \"openstack-operator-index-qvbp8\" (UID: \"527c77d8-6692-434a-88b6-4d5e3dc93337\") " pod="openstack-operators/openstack-operator-index-qvbp8" Nov 28 12:50:40 crc kubenswrapper[4779]: I1128 12:50:40.656738 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gkpn\" (UniqueName: \"kubernetes.io/projected/527c77d8-6692-434a-88b6-4d5e3dc93337-kube-api-access-9gkpn\") pod \"openstack-operator-index-qvbp8\" (UID: \"527c77d8-6692-434a-88b6-4d5e3dc93337\") " pod="openstack-operators/openstack-operator-index-qvbp8" Nov 28 12:50:40 crc kubenswrapper[4779]: I1128 12:50:40.757832 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qvbp8" Nov 28 12:50:41 crc kubenswrapper[4779]: I1128 12:50:41.406537 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qvbp8"] Nov 28 12:50:41 crc kubenswrapper[4779]: W1128 12:50:41.416486 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod527c77d8_6692_434a_88b6_4d5e3dc93337.slice/crio-0402452ae48a1688a9b891eefdd6e6e549a04534fc6d4384edaa123346337f72 WatchSource:0}: Error finding container 0402452ae48a1688a9b891eefdd6e6e549a04534fc6d4384edaa123346337f72: Status 404 returned error can't find the container with id 0402452ae48a1688a9b891eefdd6e6e549a04534fc6d4384edaa123346337f72 Nov 28 12:50:41 crc kubenswrapper[4779]: I1128 12:50:41.721911 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nxg68" Nov 28 12:50:41 crc kubenswrapper[4779]: I1128 12:50:41.811632 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-89xvz" Nov 28 12:50:41 crc kubenswrapper[4779]: I1128 12:50:41.824467 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qvbp8" event={"ID":"527c77d8-6692-434a-88b6-4d5e3dc93337","Type":"ContainerStarted","Data":"a84eb7669e58fa14482ec0f297ba7715effc96b7b1b965d038f937b7104ca053"} Nov 28 12:50:41 crc kubenswrapper[4779]: I1128 12:50:41.824576 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qvbp8" event={"ID":"527c77d8-6692-434a-88b6-4d5e3dc93337","Type":"ContainerStarted","Data":"0402452ae48a1688a9b891eefdd6e6e549a04534fc6d4384edaa123346337f72"} Nov 28 12:50:41 crc kubenswrapper[4779]: I1128 12:50:41.826338 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9gzt5" event={"ID":"bf5180d6-52b7-4218-825d-3117ef21c04b","Type":"ContainerStarted","Data":"2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef"} Nov 28 12:50:41 crc kubenswrapper[4779]: I1128 12:50:41.826424 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9gzt5" podUID="bf5180d6-52b7-4218-825d-3117ef21c04b" containerName="registry-server" containerID="cri-o://2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef" gracePeriod=2 Nov 28 12:50:41 crc kubenswrapper[4779]: I1128 12:50:41.864689 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qvbp8" podStartSLOduration=1.800898279 podStartE2EDuration="1.86467447s" podCreationTimestamp="2025-11-28 12:50:40 +0000 UTC" firstStartedPulling="2025-11-28 12:50:41.420665176 +0000 UTC m=+901.986340530" lastFinishedPulling="2025-11-28 12:50:41.484441327 +0000 UTC m=+902.050116721" observedRunningTime="2025-11-28 12:50:41.862661016 +0000 UTC m=+902.428336400" watchObservedRunningTime="2025-11-28 12:50:41.86467447 +0000 UTC m=+902.430349824" Nov 28 12:50:41 crc kubenswrapper[4779]: I1128 12:50:41.888034 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9gzt5" podStartSLOduration=1.921622062 podStartE2EDuration="5.888007726s" podCreationTimestamp="2025-11-28 12:50:36 +0000 UTC" firstStartedPulling="2025-11-28 12:50:37.00438625 +0000 UTC 
m=+897.570061614" lastFinishedPulling="2025-11-28 12:50:40.970771914 +0000 UTC m=+901.536447278" observedRunningTime="2025-11-28 12:50:41.880831553 +0000 UTC m=+902.446506937" watchObservedRunningTime="2025-11-28 12:50:41.888007726 +0000 UTC m=+902.453683090" Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.262243 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9gzt5" Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.346122 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-w5vz4" Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.362898 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qph2\" (UniqueName: \"kubernetes.io/projected/bf5180d6-52b7-4218-825d-3117ef21c04b-kube-api-access-2qph2\") pod \"bf5180d6-52b7-4218-825d-3117ef21c04b\" (UID: \"bf5180d6-52b7-4218-825d-3117ef21c04b\") " Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.374919 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5180d6-52b7-4218-825d-3117ef21c04b-kube-api-access-2qph2" (OuterVolumeSpecName: "kube-api-access-2qph2") pod "bf5180d6-52b7-4218-825d-3117ef21c04b" (UID: "bf5180d6-52b7-4218-825d-3117ef21c04b"). InnerVolumeSpecName "kube-api-access-2qph2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.464911 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qph2\" (UniqueName: \"kubernetes.io/projected/bf5180d6-52b7-4218-825d-3117ef21c04b-kube-api-access-2qph2\") on node \"crc\" DevicePath \"\"" Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.835522 4779 generic.go:334] "Generic (PLEG): container finished" podID="bf5180d6-52b7-4218-825d-3117ef21c04b" containerID="2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef" exitCode=0 Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.836401 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9gzt5" Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.839317 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9gzt5" event={"ID":"bf5180d6-52b7-4218-825d-3117ef21c04b","Type":"ContainerDied","Data":"2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef"} Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.839404 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9gzt5" event={"ID":"bf5180d6-52b7-4218-825d-3117ef21c04b","Type":"ContainerDied","Data":"f80ffcb8d837f64601aae651b1d44033079c09d2443198fd02030ff39d98f702"} Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.839480 4779 scope.go:117] "RemoveContainer" containerID="2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef" Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.873676 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9gzt5"] Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.873860 4779 scope.go:117] "RemoveContainer" containerID="2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef" Nov 28 12:50:42 crc kubenswrapper[4779]: E1128 12:50:42.874684 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef\": container with ID starting with 2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef not found: ID does not exist" containerID="2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef" Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.874724 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef"} err="failed to get container status \"2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef\": rpc error: code = NotFound desc = could not find container \"2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef\": container with ID starting with 2a16f3d675a43fbdb1d1a6a2db7a72197f5ade978ae392a0d2748f56539978ef not found: ID does not exist" Nov 28 12:50:42 crc kubenswrapper[4779]: I1128 12:50:42.878977 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9gzt5"] Nov 28 12:50:43 crc kubenswrapper[4779]: I1128 12:50:43.740205 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5180d6-52b7-4218-825d-3117ef21c04b" path="/var/lib/kubelet/pods/bf5180d6-52b7-4218-825d-3117ef21c04b/volumes" Nov 28 12:50:47 crc kubenswrapper[4779]: I1128 12:50:47.799069 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6d58t"] Nov 28 12:50:47 crc kubenswrapper[4779]: E1128 12:50:47.800048 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5180d6-52b7-4218-825d-3117ef21c04b" containerName="registry-server" Nov 28 12:50:47 crc kubenswrapper[4779]: I1128 12:50:47.800080 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5180d6-52b7-4218-825d-3117ef21c04b" containerName="registry-server" Nov 28 12:50:47 crc kubenswrapper[4779]: I1128 12:50:47.800383 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5180d6-52b7-4218-825d-3117ef21c04b" containerName="registry-server" Nov 28 12:50:47 crc 
kubenswrapper[4779]: I1128 12:50:47.802159 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:47 crc kubenswrapper[4779]: I1128 12:50:47.812844 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6d58t"] Nov 28 12:50:47 crc kubenswrapper[4779]: I1128 12:50:47.950950 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6t77\" (UniqueName: \"kubernetes.io/projected/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-kube-api-access-s6t77\") pod \"community-operators-6d58t\" (UID: \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\") " pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:47 crc kubenswrapper[4779]: I1128 12:50:47.951411 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-utilities\") pod \"community-operators-6d58t\" (UID: \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\") " pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:47 crc kubenswrapper[4779]: I1128 12:50:47.951505 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-catalog-content\") pod \"community-operators-6d58t\" (UID: \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\") " pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:48 crc kubenswrapper[4779]: I1128 12:50:48.053269 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-utilities\") pod \"community-operators-6d58t\" (UID: \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\") " pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:48 crc kubenswrapper[4779]: I1128 12:50:48.053332 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-catalog-content\") pod \"community-operators-6d58t\" (UID: \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\") " pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:48 crc kubenswrapper[4779]: I1128 12:50:48.053367 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6t77\" (UniqueName: \"kubernetes.io/projected/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-kube-api-access-s6t77\") pod \"community-operators-6d58t\" (UID: \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\") " pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:48 crc kubenswrapper[4779]: I1128 12:50:48.053844 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-utilities\") pod \"community-operators-6d58t\" (UID: \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\") " pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:48 crc kubenswrapper[4779]: I1128 12:50:48.054076 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-catalog-content\") pod \"community-operators-6d58t\" (UID: \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\") " pod="openshift-marketplace/community-operators-6d58t" Nov 
28 12:50:48 crc kubenswrapper[4779]: I1128 12:50:48.074875 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6t77\" (UniqueName: \"kubernetes.io/projected/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-kube-api-access-s6t77\") pod \"community-operators-6d58t\" (UID: \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\") " pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:48 crc kubenswrapper[4779]: I1128 12:50:48.133036 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:48 crc kubenswrapper[4779]: I1128 12:50:48.641793 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6d58t"] Nov 28 12:50:48 crc kubenswrapper[4779]: I1128 12:50:48.891350 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d58t" event={"ID":"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3","Type":"ContainerStarted","Data":"59497dbf3d745715660761bcfe555ada106e627ea5b1aac73bfbafee4b9dc026"} Nov 28 12:50:49 crc kubenswrapper[4779]: I1128 12:50:49.901775 4779 generic.go:334] "Generic (PLEG): container finished" podID="ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" containerID="81e045252e787aa11f483b41f25241130bac59175210e422a87788e3230e5b06" exitCode=0 Nov 28 12:50:49 crc kubenswrapper[4779]: I1128 12:50:49.901835 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d58t" event={"ID":"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3","Type":"ContainerDied","Data":"81e045252e787aa11f483b41f25241130bac59175210e422a87788e3230e5b06"} Nov 28 12:50:50 crc kubenswrapper[4779]: I1128 12:50:50.758209 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-qvbp8" Nov 28 12:50:50 crc kubenswrapper[4779]: I1128 12:50:50.758872 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qvbp8" Nov 28 12:50:50 crc kubenswrapper[4779]: I1128 12:50:50.812231 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qvbp8" Nov 28 12:50:50 crc kubenswrapper[4779]: I1128 12:50:50.919524 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d58t" event={"ID":"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3","Type":"ContainerStarted","Data":"ca3c18ddba9049a6d7112c6deb4abe8e0865131dad9bab0898932f93ad51dbb6"} Nov 28 12:50:50 crc kubenswrapper[4779]: I1128 12:50:50.975276 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qvbp8" Nov 28 12:50:51 crc kubenswrapper[4779]: I1128 12:50:51.928056 4779 generic.go:334] "Generic (PLEG): container finished" podID="ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" containerID="ca3c18ddba9049a6d7112c6deb4abe8e0865131dad9bab0898932f93ad51dbb6" exitCode=0 Nov 28 12:50:51 crc kubenswrapper[4779]: I1128 12:50:51.928157 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d58t" event={"ID":"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3","Type":"ContainerDied","Data":"ca3c18ddba9049a6d7112c6deb4abe8e0865131dad9bab0898932f93ad51dbb6"} Nov 28 12:50:53 crc kubenswrapper[4779]: I1128 12:50:53.945495 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d58t" 
event={"ID":"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3","Type":"ContainerStarted","Data":"6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f"} Nov 28 12:50:53 crc kubenswrapper[4779]: I1128 12:50:53.965630 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6d58t" podStartSLOduration=4.009886843 podStartE2EDuration="6.96561384s" podCreationTimestamp="2025-11-28 12:50:47 +0000 UTC" firstStartedPulling="2025-11-28 12:50:49.903821235 +0000 UTC m=+910.469496629" lastFinishedPulling="2025-11-28 12:50:52.859548232 +0000 UTC m=+913.425223626" observedRunningTime="2025-11-28 12:50:53.962740983 +0000 UTC m=+914.528416377" watchObservedRunningTime="2025-11-28 12:50:53.96561384 +0000 UTC m=+914.531289204" Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.417669 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22"] Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.418965 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.422023 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j2f9m" Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.425647 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64p9j\" (UniqueName: \"kubernetes.io/projected/57c6c245-3c5b-41bf-9de3-c5d23d132c71-kube-api-access-64p9j\") pod \"e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22\" (UID: \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\") " pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.425726 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57c6c245-3c5b-41bf-9de3-c5d23d132c71-util\") pod \"e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22\" (UID: \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\") " pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.425794 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57c6c245-3c5b-41bf-9de3-c5d23d132c71-bundle\") pod \"e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22\" (UID: \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\") " pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.425895 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22"] Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.527465 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57c6c245-3c5b-41bf-9de3-c5d23d132c71-bundle\") pod \"e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22\" (UID: \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\") " pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 
12:50:57.527609 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64p9j\" (UniqueName: \"kubernetes.io/projected/57c6c245-3c5b-41bf-9de3-c5d23d132c71-kube-api-access-64p9j\") pod \"e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22\" (UID: \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\") " pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.527699 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57c6c245-3c5b-41bf-9de3-c5d23d132c71-util\") pod \"e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22\" (UID: \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\") " pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.528333 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57c6c245-3c5b-41bf-9de3-c5d23d132c71-bundle\") pod \"e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22\" (UID: \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\") " pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.528409 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57c6c245-3c5b-41bf-9de3-c5d23d132c71-util\") pod \"e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22\" (UID: \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\") " pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.562589 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64p9j\" (UniqueName: \"kubernetes.io/projected/57c6c245-3c5b-41bf-9de3-c5d23d132c71-kube-api-access-64p9j\") pod \"e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22\" (UID: \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\") " pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:50:57 crc kubenswrapper[4779]: I1128 12:50:57.746519 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:50:58 crc kubenswrapper[4779]: I1128 12:50:58.084525 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22"] Nov 28 12:50:58 crc kubenswrapper[4779]: I1128 12:50:58.133307 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:58 crc kubenswrapper[4779]: I1128 12:50:58.133563 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:58 crc kubenswrapper[4779]: I1128 12:50:58.199843 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:58 crc kubenswrapper[4779]: I1128 12:50:58.985321 4779 generic.go:334] "Generic (PLEG): container finished" podID="57c6c245-3c5b-41bf-9de3-c5d23d132c71" containerID="7aae477a681d56215c2f2007f3a46a66e40918051d01e8614985ff0827fda481" exitCode=0 Nov 28 12:50:58 crc kubenswrapper[4779]: I1128 12:50:58.985463 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" event={"ID":"57c6c245-3c5b-41bf-9de3-c5d23d132c71","Type":"ContainerDied","Data":"7aae477a681d56215c2f2007f3a46a66e40918051d01e8614985ff0827fda481"} Nov 28 12:50:58 crc kubenswrapper[4779]: I1128 12:50:58.985845 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" event={"ID":"57c6c245-3c5b-41bf-9de3-c5d23d132c71","Type":"ContainerStarted","Data":"83810d6cd51dad92a0e97dc70bb590d6d8ee4e16ba4c945ce9f5249960058478"} Nov 28 12:50:59 crc kubenswrapper[4779]: I1128 12:50:59.060218 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:50:59 crc kubenswrapper[4779]: I1128 12:50:59.996302 4779 generic.go:334] "Generic (PLEG): container finished" podID="57c6c245-3c5b-41bf-9de3-c5d23d132c71" containerID="101a876f67c4c9cccc9c69787471d87804f2dd53f1f6b979f215eadcf7d0d7e6" exitCode=0 Nov 28 12:50:59 crc kubenswrapper[4779]: I1128 12:50:59.996366 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" event={"ID":"57c6c245-3c5b-41bf-9de3-c5d23d132c71","Type":"ContainerDied","Data":"101a876f67c4c9cccc9c69787471d87804f2dd53f1f6b979f215eadcf7d0d7e6"} Nov 28 12:51:00 crc kubenswrapper[4779]: I1128 12:51:00.976511 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6d58t"] Nov 28 12:51:01 crc kubenswrapper[4779]: I1128 12:51:01.008317 4779 generic.go:334] "Generic (PLEG): container finished" podID="57c6c245-3c5b-41bf-9de3-c5d23d132c71" containerID="2b20f43a625e2b56f44a9e1f8c957163b77f68048708b3a70cdbf27b3fcf345d" exitCode=0 Nov 28 12:51:01 crc kubenswrapper[4779]: I1128 12:51:01.008407 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" event={"ID":"57c6c245-3c5b-41bf-9de3-c5d23d132c71","Type":"ContainerDied","Data":"2b20f43a625e2b56f44a9e1f8c957163b77f68048708b3a70cdbf27b3fcf345d"} Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.016590 4779 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/community-operators-6d58t" podUID="ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" containerName="registry-server" containerID="cri-o://6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f" gracePeriod=2 Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.317890 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.398181 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57c6c245-3c5b-41bf-9de3-c5d23d132c71-util\") pod \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\" (UID: \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\") " Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.398257 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57c6c245-3c5b-41bf-9de3-c5d23d132c71-bundle\") pod \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\" (UID: \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\") " Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.398349 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64p9j\" (UniqueName: \"kubernetes.io/projected/57c6c245-3c5b-41bf-9de3-c5d23d132c71-kube-api-access-64p9j\") pod \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\" (UID: \"57c6c245-3c5b-41bf-9de3-c5d23d132c71\") " Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.401480 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57c6c245-3c5b-41bf-9de3-c5d23d132c71-bundle" (OuterVolumeSpecName: "bundle") pod "57c6c245-3c5b-41bf-9de3-c5d23d132c71" (UID: "57c6c245-3c5b-41bf-9de3-c5d23d132c71"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.406624 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c6c245-3c5b-41bf-9de3-c5d23d132c71-kube-api-access-64p9j" (OuterVolumeSpecName: "kube-api-access-64p9j") pod "57c6c245-3c5b-41bf-9de3-c5d23d132c71" (UID: "57c6c245-3c5b-41bf-9de3-c5d23d132c71"). InnerVolumeSpecName "kube-api-access-64p9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.418391 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57c6c245-3c5b-41bf-9de3-c5d23d132c71-util" (OuterVolumeSpecName: "util") pod "57c6c245-3c5b-41bf-9de3-c5d23d132c71" (UID: "57c6c245-3c5b-41bf-9de3-c5d23d132c71"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.460998 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.499798 4779 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57c6c245-3c5b-41bf-9de3-c5d23d132c71-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.499831 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64p9j\" (UniqueName: \"kubernetes.io/projected/57c6c245-3c5b-41bf-9de3-c5d23d132c71-kube-api-access-64p9j\") on node \"crc\" DevicePath \"\"" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.499844 4779 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57c6c245-3c5b-41bf-9de3-c5d23d132c71-util\") on node \"crc\" DevicePath \"\"" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.600252 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6t77\" (UniqueName: \"kubernetes.io/projected/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-kube-api-access-s6t77\") pod \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\" (UID: \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\") " Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.600359 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-catalog-content\") pod \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\" (UID: \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\") " Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.600399 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-utilities\") pod \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\" (UID: \"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3\") " Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.601395 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-utilities" (OuterVolumeSpecName: "utilities") pod "ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" (UID: "ad5b74e2-ad2a-426b-bf89-aadf1284e5d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.605360 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-kube-api-access-s6t77" (OuterVolumeSpecName: "kube-api-access-s6t77") pod "ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" (UID: "ad5b74e2-ad2a-426b-bf89-aadf1284e5d3"). InnerVolumeSpecName "kube-api-access-s6t77". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.700435 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" (UID: "ad5b74e2-ad2a-426b-bf89-aadf1284e5d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.701752 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.701827 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6t77\" (UniqueName: \"kubernetes.io/projected/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-kube-api-access-s6t77\") on node \"crc\" DevicePath \"\"" Nov 28 12:51:02 crc kubenswrapper[4779]: I1128 12:51:02.701863 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.030044 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" event={"ID":"57c6c245-3c5b-41bf-9de3-c5d23d132c71","Type":"ContainerDied","Data":"83810d6cd51dad92a0e97dc70bb590d6d8ee4e16ba4c945ce9f5249960058478"} Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.030144 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.030167 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83810d6cd51dad92a0e97dc70bb590d6d8ee4e16ba4c945ce9f5249960058478" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.034222 4779 generic.go:334] "Generic (PLEG): container finished" podID="ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" containerID="6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f" exitCode=0 Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.034270 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d58t" event={"ID":"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3","Type":"ContainerDied","Data":"6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f"} Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.034300 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6d58t" event={"ID":"ad5b74e2-ad2a-426b-bf89-aadf1284e5d3","Type":"ContainerDied","Data":"59497dbf3d745715660761bcfe555ada106e627ea5b1aac73bfbafee4b9dc026"} Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.034329 4779 scope.go:117] "RemoveContainer" containerID="6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.034517 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6d58t" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.068810 4779 scope.go:117] "RemoveContainer" containerID="ca3c18ddba9049a6d7112c6deb4abe8e0865131dad9bab0898932f93ad51dbb6" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.087227 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6d58t"] Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.096364 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6d58t"] Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.099256 4779 scope.go:117] "RemoveContainer" containerID="81e045252e787aa11f483b41f25241130bac59175210e422a87788e3230e5b06" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.123820 4779 scope.go:117] "RemoveContainer" containerID="6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f" Nov 28 12:51:03 crc kubenswrapper[4779]: E1128 12:51:03.124409 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f\": container with ID starting with 6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f not found: ID does not exist" containerID="6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.124472 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f"} err="failed to get container status \"6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f\": rpc error: code = NotFound desc = could not find container \"6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f\": container with ID starting with 6a3b56e11db54797a423d108bd24f2159711cc71e41dc691bc0987ec64139f3f not found: ID does not exist" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.124519 4779 scope.go:117] "RemoveContainer" containerID="ca3c18ddba9049a6d7112c6deb4abe8e0865131dad9bab0898932f93ad51dbb6" Nov 28 12:51:03 crc kubenswrapper[4779]: E1128 12:51:03.125405 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca3c18ddba9049a6d7112c6deb4abe8e0865131dad9bab0898932f93ad51dbb6\": container with ID starting with ca3c18ddba9049a6d7112c6deb4abe8e0865131dad9bab0898932f93ad51dbb6 not found: ID does not exist" containerID="ca3c18ddba9049a6d7112c6deb4abe8e0865131dad9bab0898932f93ad51dbb6" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.125523 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca3c18ddba9049a6d7112c6deb4abe8e0865131dad9bab0898932f93ad51dbb6"} err="failed to get container status \"ca3c18ddba9049a6d7112c6deb4abe8e0865131dad9bab0898932f93ad51dbb6\": rpc error: code = NotFound desc = could not find container \"ca3c18ddba9049a6d7112c6deb4abe8e0865131dad9bab0898932f93ad51dbb6\": container with ID starting with ca3c18ddba9049a6d7112c6deb4abe8e0865131dad9bab0898932f93ad51dbb6 not found: ID does not exist" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.125608 4779 scope.go:117] "RemoveContainer" containerID="81e045252e787aa11f483b41f25241130bac59175210e422a87788e3230e5b06" Nov 28 12:51:03 crc kubenswrapper[4779]: E1128 12:51:03.126420 4779 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"81e045252e787aa11f483b41f25241130bac59175210e422a87788e3230e5b06\": container with ID starting with 81e045252e787aa11f483b41f25241130bac59175210e422a87788e3230e5b06 not found: ID does not exist" containerID="81e045252e787aa11f483b41f25241130bac59175210e422a87788e3230e5b06" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.126472 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e045252e787aa11f483b41f25241130bac59175210e422a87788e3230e5b06"} err="failed to get container status \"81e045252e787aa11f483b41f25241130bac59175210e422a87788e3230e5b06\": rpc error: code = NotFound desc = could not find container \"81e045252e787aa11f483b41f25241130bac59175210e422a87788e3230e5b06\": container with ID starting with 81e045252e787aa11f483b41f25241130bac59175210e422a87788e3230e5b06 not found: ID does not exist" Nov 28 12:51:03 crc kubenswrapper[4779]: I1128 12:51:03.740422 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" path="/var/lib/kubelet/pods/ad5b74e2-ad2a-426b-bf89-aadf1284e5d3/volumes" Nov 28 12:51:05 crc kubenswrapper[4779]: I1128 12:51:05.936054 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r"] Nov 28 12:51:05 crc kubenswrapper[4779]: E1128 12:51:05.936337 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c6c245-3c5b-41bf-9de3-c5d23d132c71" containerName="extract" Nov 28 12:51:05 crc kubenswrapper[4779]: I1128 12:51:05.936353 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c6c245-3c5b-41bf-9de3-c5d23d132c71" containerName="extract" Nov 28 12:51:05 crc kubenswrapper[4779]: E1128 12:51:05.936376 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c6c245-3c5b-41bf-9de3-c5d23d132c71" containerName="pull" Nov 28 12:51:05 crc kubenswrapper[4779]: I1128 12:51:05.936389 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c6c245-3c5b-41bf-9de3-c5d23d132c71" containerName="pull" Nov 28 12:51:05 crc kubenswrapper[4779]: E1128 12:51:05.936405 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" containerName="extract-utilities" Nov 28 12:51:05 crc kubenswrapper[4779]: I1128 12:51:05.936415 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" containerName="extract-utilities" Nov 28 12:51:05 crc kubenswrapper[4779]: E1128 12:51:05.936425 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" containerName="extract-content" Nov 28 12:51:05 crc kubenswrapper[4779]: I1128 12:51:05.936433 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" containerName="extract-content" Nov 28 12:51:05 crc kubenswrapper[4779]: E1128 12:51:05.936452 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c6c245-3c5b-41bf-9de3-c5d23d132c71" containerName="util" Nov 28 12:51:05 crc kubenswrapper[4779]: I1128 12:51:05.936460 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c6c245-3c5b-41bf-9de3-c5d23d132c71" containerName="util" Nov 28 12:51:05 crc kubenswrapper[4779]: E1128 12:51:05.936476 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" containerName="registry-server" Nov 28 12:51:05 crc 
kubenswrapper[4779]: I1128 12:51:05.936485 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" containerName="registry-server" Nov 28 12:51:05 crc kubenswrapper[4779]: I1128 12:51:05.936640 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad5b74e2-ad2a-426b-bf89-aadf1284e5d3" containerName="registry-server" Nov 28 12:51:05 crc kubenswrapper[4779]: I1128 12:51:05.936660 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c6c245-3c5b-41bf-9de3-c5d23d132c71" containerName="extract" Nov 28 12:51:05 crc kubenswrapper[4779]: I1128 12:51:05.937128 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r" Nov 28 12:51:05 crc kubenswrapper[4779]: I1128 12:51:05.939013 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-rts52" Nov 28 12:51:05 crc kubenswrapper[4779]: I1128 12:51:05.946717 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7cm7\" (UniqueName: \"kubernetes.io/projected/459f9c74-7dc8-401d-8df4-2c1b947f87df-kube-api-access-w7cm7\") pod \"openstack-operator-controller-operator-7bb768d89f-48p4r\" (UID: \"459f9c74-7dc8-401d-8df4-2c1b947f87df\") " pod="openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r" Nov 28 12:51:05 crc kubenswrapper[4779]: I1128 12:51:05.965366 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r"] Nov 28 12:51:06 crc kubenswrapper[4779]: I1128 12:51:06.048360 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7cm7\" (UniqueName: \"kubernetes.io/projected/459f9c74-7dc8-401d-8df4-2c1b947f87df-kube-api-access-w7cm7\") pod \"openstack-operator-controller-operator-7bb768d89f-48p4r\" (UID: \"459f9c74-7dc8-401d-8df4-2c1b947f87df\") " pod="openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r" Nov 28 12:51:06 crc kubenswrapper[4779]: I1128 12:51:06.067109 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7cm7\" (UniqueName: \"kubernetes.io/projected/459f9c74-7dc8-401d-8df4-2c1b947f87df-kube-api-access-w7cm7\") pod \"openstack-operator-controller-operator-7bb768d89f-48p4r\" (UID: \"459f9c74-7dc8-401d-8df4-2c1b947f87df\") " pod="openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r" Nov 28 12:51:06 crc kubenswrapper[4779]: I1128 12:51:06.253252 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r" Nov 28 12:51:06 crc kubenswrapper[4779]: I1128 12:51:06.517618 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r"] Nov 28 12:51:07 crc kubenswrapper[4779]: I1128 12:51:07.089921 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r" event={"ID":"459f9c74-7dc8-401d-8df4-2c1b947f87df","Type":"ContainerStarted","Data":"3dc807e4633d54d87b4f1e63536588714b907d1986b0c779364ae2eb92988441"} Nov 28 12:51:12 crc kubenswrapper[4779]: I1128 12:51:12.125380 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r" event={"ID":"459f9c74-7dc8-401d-8df4-2c1b947f87df","Type":"ContainerStarted","Data":"7b5bb4d733e1b87ff3d85726bd699a83a859d1504b2aeda2ace4e5ca5a6658ef"} Nov 28 12:51:12 crc kubenswrapper[4779]: I1128 12:51:12.128285 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r" Nov 28 12:51:12 crc kubenswrapper[4779]: I1128 12:51:12.170267 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r" podStartSLOduration=2.253663313 podStartE2EDuration="7.170240624s" podCreationTimestamp="2025-11-28 12:51:05 +0000 UTC" firstStartedPulling="2025-11-28 12:51:06.535165904 +0000 UTC m=+927.100841298" lastFinishedPulling="2025-11-28 12:51:11.451743215 +0000 UTC m=+932.017418609" observedRunningTime="2025-11-28 12:51:12.166858033 +0000 UTC m=+932.732533427" watchObservedRunningTime="2025-11-28 12:51:12.170240624 +0000 UTC m=+932.735916018" Nov 28 12:51:16 crc kubenswrapper[4779]: I1128 12:51:16.257553 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7bb768d89f-48p4r" Nov 28 12:51:16 crc kubenswrapper[4779]: I1128 12:51:16.285143 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:51:16 crc kubenswrapper[4779]: I1128 12:51:16.285446 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.489900 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fr8cd"] Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.493595 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fr8cd" Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.511910 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fr8cd"] Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.588127 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba53250e-91a2-45bf-a609-ebed70fce751-utilities\") pod \"certified-operators-fr8cd\" (UID: \"ba53250e-91a2-45bf-a609-ebed70fce751\") " pod="openshift-marketplace/certified-operators-fr8cd" Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.588178 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpwn2\" (UniqueName: \"kubernetes.io/projected/ba53250e-91a2-45bf-a609-ebed70fce751-kube-api-access-qpwn2\") pod \"certified-operators-fr8cd\" (UID: \"ba53250e-91a2-45bf-a609-ebed70fce751\") " pod="openshift-marketplace/certified-operators-fr8cd" Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.588231 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba53250e-91a2-45bf-a609-ebed70fce751-catalog-content\") pod \"certified-operators-fr8cd\" (UID: \"ba53250e-91a2-45bf-a609-ebed70fce751\") " pod="openshift-marketplace/certified-operators-fr8cd" Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.689281 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpwn2\" (UniqueName: \"kubernetes.io/projected/ba53250e-91a2-45bf-a609-ebed70fce751-kube-api-access-qpwn2\") pod \"certified-operators-fr8cd\" (UID: \"ba53250e-91a2-45bf-a609-ebed70fce751\") " pod="openshift-marketplace/certified-operators-fr8cd" Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.689323 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba53250e-91a2-45bf-a609-ebed70fce751-catalog-content\") pod \"certified-operators-fr8cd\" (UID: \"ba53250e-91a2-45bf-a609-ebed70fce751\") " pod="openshift-marketplace/certified-operators-fr8cd" Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.689391 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba53250e-91a2-45bf-a609-ebed70fce751-utilities\") pod \"certified-operators-fr8cd\" (UID: \"ba53250e-91a2-45bf-a609-ebed70fce751\") " pod="openshift-marketplace/certified-operators-fr8cd" Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.689857 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba53250e-91a2-45bf-a609-ebed70fce751-utilities\") pod \"certified-operators-fr8cd\" (UID: \"ba53250e-91a2-45bf-a609-ebed70fce751\") " pod="openshift-marketplace/certified-operators-fr8cd" Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.690288 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba53250e-91a2-45bf-a609-ebed70fce751-catalog-content\") pod \"certified-operators-fr8cd\" (UID: \"ba53250e-91a2-45bf-a609-ebed70fce751\") " pod="openshift-marketplace/certified-operators-fr8cd" Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.706627 4779 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qpwn2\" (UniqueName: \"kubernetes.io/projected/ba53250e-91a2-45bf-a609-ebed70fce751-kube-api-access-qpwn2\") pod \"certified-operators-fr8cd\" (UID: \"ba53250e-91a2-45bf-a609-ebed70fce751\") " pod="openshift-marketplace/certified-operators-fr8cd" Nov 28 12:51:28 crc kubenswrapper[4779]: I1128 12:51:28.808688 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fr8cd" Nov 28 12:51:29 crc kubenswrapper[4779]: W1128 12:51:29.074325 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba53250e_91a2_45bf_a609_ebed70fce751.slice/crio-aba6b8e0addb44e1569cdb96b948ef940c3021f7ac665877e4d1fc7597c479f3 WatchSource:0}: Error finding container aba6b8e0addb44e1569cdb96b948ef940c3021f7ac665877e4d1fc7597c479f3: Status 404 returned error can't find the container with id aba6b8e0addb44e1569cdb96b948ef940c3021f7ac665877e4d1fc7597c479f3 Nov 28 12:51:29 crc kubenswrapper[4779]: I1128 12:51:29.079755 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fr8cd"] Nov 28 12:51:29 crc kubenswrapper[4779]: I1128 12:51:29.227515 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr8cd" event={"ID":"ba53250e-91a2-45bf-a609-ebed70fce751","Type":"ContainerStarted","Data":"f57a1a4e37666a44f13873f291dce18a05fd7f92323dd3aba1b8c3b8ba81a77f"} Nov 28 12:51:29 crc kubenswrapper[4779]: I1128 12:51:29.227555 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr8cd" event={"ID":"ba53250e-91a2-45bf-a609-ebed70fce751","Type":"ContainerStarted","Data":"aba6b8e0addb44e1569cdb96b948ef940c3021f7ac665877e4d1fc7597c479f3"} Nov 28 12:51:30 crc kubenswrapper[4779]: I1128 12:51:30.235032 4779 generic.go:334] "Generic (PLEG): container finished" podID="ba53250e-91a2-45bf-a609-ebed70fce751" containerID="f57a1a4e37666a44f13873f291dce18a05fd7f92323dd3aba1b8c3b8ba81a77f" exitCode=0 Nov 28 12:51:30 crc kubenswrapper[4779]: I1128 12:51:30.235338 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr8cd" event={"ID":"ba53250e-91a2-45bf-a609-ebed70fce751","Type":"ContainerDied","Data":"f57a1a4e37666a44f13873f291dce18a05fd7f92323dd3aba1b8c3b8ba81a77f"} Nov 28 12:51:31 crc kubenswrapper[4779]: I1128 12:51:31.243208 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr8cd" event={"ID":"ba53250e-91a2-45bf-a609-ebed70fce751","Type":"ContainerStarted","Data":"dcd0ad50feda751464a9036a836e80643f642e0223875c58529e37d0bb92b977"} Nov 28 12:51:32 crc kubenswrapper[4779]: I1128 12:51:32.254399 4779 generic.go:334] "Generic (PLEG): container finished" podID="ba53250e-91a2-45bf-a609-ebed70fce751" containerID="dcd0ad50feda751464a9036a836e80643f642e0223875c58529e37d0bb92b977" exitCode=0 Nov 28 12:51:32 crc kubenswrapper[4779]: I1128 12:51:32.254475 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr8cd" event={"ID":"ba53250e-91a2-45bf-a609-ebed70fce751","Type":"ContainerDied","Data":"dcd0ad50feda751464a9036a836e80643f642e0223875c58529e37d0bb92b977"} Nov 28 12:51:33 crc kubenswrapper[4779]: I1128 12:51:33.263388 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr8cd" 
event={"ID":"ba53250e-91a2-45bf-a609-ebed70fce751","Type":"ContainerStarted","Data":"9b9d4b328133a718706c072881ed99a196d7fd581aae49c971cb44324dddd166"} Nov 28 12:51:33 crc kubenswrapper[4779]: I1128 12:51:33.283556 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fr8cd" podStartSLOduration=2.586752742 podStartE2EDuration="5.283541462s" podCreationTimestamp="2025-11-28 12:51:28 +0000 UTC" firstStartedPulling="2025-11-28 12:51:30.237133941 +0000 UTC m=+950.802809335" lastFinishedPulling="2025-11-28 12:51:32.933922681 +0000 UTC m=+953.499598055" observedRunningTime="2025-11-28 12:51:33.278515057 +0000 UTC m=+953.844190421" watchObservedRunningTime="2025-11-28 12:51:33.283541462 +0000 UTC m=+953.849216816" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.249160 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.252959 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.256444 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-fxp9g" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.280464 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkzg6\" (UniqueName: \"kubernetes.io/projected/e7e646e3-00c9-4359-b012-aaff60962a76-kube-api-access-dkzg6\") pod \"barbican-operator-controller-manager-7b64f4fb85-hhr2g\" (UID: \"e7e646e3-00c9-4359-b012-aaff60962a76\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.282592 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.297471 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.298657 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.304469 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-rh5q9"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.305463 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-rh5q9" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.307453 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-4crb9" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.307637 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qvjhb" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.315698 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.321189 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-rh5q9"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.326897 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.327928 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.332890 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vszmj" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.337946 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.338876 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.347343 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.350413 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-kpdch" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.383720 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bqxs\" (UniqueName: \"kubernetes.io/projected/8d20efbb-527c-4085-a974-d49ee454b545-kube-api-access-2bqxs\") pod \"designate-operator-controller-manager-955677c94-rh5q9\" (UID: \"8d20efbb-527c-4085-a974-d49ee454b545\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-rh5q9" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.383763 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9g6h\" (UniqueName: \"kubernetes.io/projected/eaf24224-e1f5-44d8-8151-54be9408b429-kube-api-access-j9g6h\") pod \"glance-operator-controller-manager-589cbd6b5b-ns58c\" (UID: \"eaf24224-e1f5-44d8-8151-54be9408b429\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.383800 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dltn6\" (UniqueName: \"kubernetes.io/projected/b3e0c6a3-33d8-4c1e-8b44-156de87d5621-kube-api-access-dltn6\") pod \"heat-operator-controller-manager-5b77f656f-wptr7\" (UID: \"b3e0c6a3-33d8-4c1e-8b44-156de87d5621\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.383826 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lklwx\" (UniqueName: \"kubernetes.io/projected/854f928b-5068-4de9-b865-7fb2a26ca9e4-kube-api-access-lklwx\") pod \"cinder-operator-controller-manager-6b7f75547b-l52fj\" (UID: \"854f928b-5068-4de9-b865-7fb2a26ca9e4\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.383887 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkzg6\" (UniqueName: \"kubernetes.io/projected/e7e646e3-00c9-4359-b012-aaff60962a76-kube-api-access-dkzg6\") pod \"barbican-operator-controller-manager-7b64f4fb85-hhr2g\" (UID: \"e7e646e3-00c9-4359-b012-aaff60962a76\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.384129 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.397228 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.398322 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.401458 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bvt5h" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.405638 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.406755 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.410008 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zwqkf" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.410810 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.417220 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkzg6\" (UniqueName: \"kubernetes.io/projected/e7e646e3-00c9-4359-b012-aaff60962a76-kube-api-access-dkzg6\") pod \"barbican-operator-controller-manager-7b64f4fb85-hhr2g\" (UID: \"e7e646e3-00c9-4359-b012-aaff60962a76\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.424157 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.436895 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.437833 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.442742 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9wkwj" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.445051 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.445871 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.447748 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-8m9s2" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.459169 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.466529 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.481159 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.482221 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.485518 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bqxs\" (UniqueName: \"kubernetes.io/projected/8d20efbb-527c-4085-a974-d49ee454b545-kube-api-access-2bqxs\") pod \"designate-operator-controller-manager-955677c94-rh5q9\" (UID: \"8d20efbb-527c-4085-a974-d49ee454b545\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-rh5q9" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.485550 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9g6h\" (UniqueName: \"kubernetes.io/projected/eaf24224-e1f5-44d8-8151-54be9408b429-kube-api-access-j9g6h\") pod \"glance-operator-controller-manager-589cbd6b5b-ns58c\" (UID: \"eaf24224-e1f5-44d8-8151-54be9408b429\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.485583 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5vjr\" (UniqueName: \"kubernetes.io/projected/493d54b8-1e0a-4270-8180-ba1bc746c783-kube-api-access-t5vjr\") pod \"ironic-operator-controller-manager-67cb4dc6d4-n952x\" (UID: \"493d54b8-1e0a-4270-8180-ba1bc746c783\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.485608 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dltn6\" (UniqueName: \"kubernetes.io/projected/b3e0c6a3-33d8-4c1e-8b44-156de87d5621-kube-api-access-dltn6\") pod \"heat-operator-controller-manager-5b77f656f-wptr7\" (UID: \"b3e0c6a3-33d8-4c1e-8b44-156de87d5621\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.485627 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lklwx\" (UniqueName: \"kubernetes.io/projected/854f928b-5068-4de9-b865-7fb2a26ca9e4-kube-api-access-lklwx\") pod \"cinder-operator-controller-manager-6b7f75547b-l52fj\" (UID: \"854f928b-5068-4de9-b865-7fb2a26ca9e4\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.485650 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntz5t\" (UniqueName: \"kubernetes.io/projected/40688ccc-932c-411e-8703-4bf0f11ec3bf-kube-api-access-ntz5t\") pod \"horizon-operator-controller-manager-5d494799bf-vd654\" (UID: \"40688ccc-932c-411e-8703-4bf0f11ec3bf\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.485669 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78lhw\" (UniqueName: \"kubernetes.io/projected/af7046d6-f852-4c62-83e6-ea213812d86c-kube-api-access-78lhw\") pod \"infra-operator-controller-manager-57548d458d-7pv5r\" (UID: \"af7046d6-f852-4c62-83e6-ea213812d86c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.485702 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert\") pod \"infra-operator-controller-manager-57548d458d-7pv5r\" (UID: \"af7046d6-f852-4c62-83e6-ea213812d86c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.485740 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thhxc\" (UniqueName: \"kubernetes.io/projected/da8e3e32-3cc1-4b1b-91c5-31ac6e660d65-kube-api-access-thhxc\") pod \"keystone-operator-controller-manager-7b4567c7cf-lfj45\" (UID: \"da8e3e32-3cc1-4b1b-91c5-31ac6e660d65\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.489002 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-dcxgs" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.491902 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.505861 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.506802 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.515664 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bqxs\" (UniqueName: \"kubernetes.io/projected/8d20efbb-527c-4085-a974-d49ee454b545-kube-api-access-2bqxs\") pod \"designate-operator-controller-manager-955677c94-rh5q9\" (UID: \"8d20efbb-527c-4085-a974-d49ee454b545\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-rh5q9" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.517660 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lklwx\" (UniqueName: \"kubernetes.io/projected/854f928b-5068-4de9-b865-7fb2a26ca9e4-kube-api-access-lklwx\") pod \"cinder-operator-controller-manager-6b7f75547b-l52fj\" (UID: \"854f928b-5068-4de9-b865-7fb2a26ca9e4\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.520032 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-bb29z" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.521835 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.528618 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dltn6\" (UniqueName: \"kubernetes.io/projected/b3e0c6a3-33d8-4c1e-8b44-156de87d5621-kube-api-access-dltn6\") pod \"heat-operator-controller-manager-5b77f656f-wptr7\" (UID: \"b3e0c6a3-33d8-4c1e-8b44-156de87d5621\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.528681 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.531857 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9g6h\" (UniqueName: \"kubernetes.io/projected/eaf24224-e1f5-44d8-8151-54be9408b429-kube-api-access-j9g6h\") pod \"glance-operator-controller-manager-589cbd6b5b-ns58c\" (UID: \"eaf24224-e1f5-44d8-8151-54be9408b429\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.537386 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.538405 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.547490 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-tmfgg" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.552574 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.564320 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.565690 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.572497 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7slzd" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.575117 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.577651 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.594929 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-4slz5" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.595420 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.596479 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2s64\" (UniqueName: \"kubernetes.io/projected/911b9690-ddec-439e-9ef5-a7d80562f51c-kube-api-access-h2s64\") pod \"neutron-operator-controller-manager-6fdcddb789-cnfmd\" (UID: \"911b9690-ddec-439e-9ef5-a7d80562f51c\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.596515 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frmt4\" (UniqueName: \"kubernetes.io/projected/3b4accd2-e9c1-4e51-a559-c5cf108f5af1-kube-api-access-frmt4\") pod \"nova-operator-controller-manager-79556f57fc-zzflc\" (UID: \"3b4accd2-e9c1-4e51-a559-c5cf108f5af1\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.596545 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert\") pod \"infra-operator-controller-manager-57548d458d-7pv5r\" (UID: \"af7046d6-f852-4c62-83e6-ea213812d86c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.596590 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thhxc\" (UniqueName: \"kubernetes.io/projected/da8e3e32-3cc1-4b1b-91c5-31ac6e660d65-kube-api-access-thhxc\") pod \"keystone-operator-controller-manager-7b4567c7cf-lfj45\" (UID: \"da8e3e32-3cc1-4b1b-91c5-31ac6e660d65\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.623243 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pzsr\" (UniqueName: \"kubernetes.io/projected/75996749-aa6c-4a8e-ba7f-412209db3939-kube-api-access-9pzsr\") pod \"manila-operator-controller-manager-5d499bf58b-9xxwc\" (UID: \"75996749-aa6c-4a8e-ba7f-412209db3939\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.623308 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grd5s\" (UniqueName: \"kubernetes.io/projected/b96763b6-e6a4-4429-8fe4-6b23620824c1-kube-api-access-grd5s\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-xqxsn\" (UID: \"b96763b6-e6a4-4429-8fe4-6b23620824c1\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.623353 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5vjr\" (UniqueName: \"kubernetes.io/projected/493d54b8-1e0a-4270-8180-ba1bc746c783-kube-api-access-t5vjr\") pod \"ironic-operator-controller-manager-67cb4dc6d4-n952x\" (UID: \"493d54b8-1e0a-4270-8180-ba1bc746c783\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.623409 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5p5d\" 
(UniqueName: \"kubernetes.io/projected/623cd065-a088-41d4-9b98-8be8d60c0f20-kube-api-access-v5p5d\") pod \"octavia-operator-controller-manager-64cdc6ff96-kvnt5\" (UID: \"623cd065-a088-41d4-9b98-8be8d60c0f20\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.623442 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntz5t\" (UniqueName: \"kubernetes.io/projected/40688ccc-932c-411e-8703-4bf0f11ec3bf-kube-api-access-ntz5t\") pod \"horizon-operator-controller-manager-5d494799bf-vd654\" (UID: \"40688ccc-932c-411e-8703-4bf0f11ec3bf\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.623473 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78lhw\" (UniqueName: \"kubernetes.io/projected/af7046d6-f852-4c62-83e6-ea213812d86c-kube-api-access-78lhw\") pod \"infra-operator-controller-manager-57548d458d-7pv5r\" (UID: \"af7046d6-f852-4c62-83e6-ea213812d86c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" Nov 28 12:51:35 crc kubenswrapper[4779]: E1128 12:51:35.597903 4779 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 12:51:35 crc kubenswrapper[4779]: E1128 12:51:35.623950 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert podName:af7046d6-f852-4c62-83e6-ea213812d86c nodeName:}" failed. No retries permitted until 2025-11-28 12:51:36.123927808 +0000 UTC m=+956.689603172 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert") pod "infra-operator-controller-manager-57548d458d-7pv5r" (UID: "af7046d6-f852-4c62-83e6-ea213812d86c") : secret "infra-operator-webhook-server-cert" not found Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.600377 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.631777 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.639958 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.641577 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.643726 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-rh5q9" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.671368 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thhxc\" (UniqueName: \"kubernetes.io/projected/da8e3e32-3cc1-4b1b-91c5-31ac6e660d65-kube-api-access-thhxc\") pod \"keystone-operator-controller-manager-7b4567c7cf-lfj45\" (UID: \"da8e3e32-3cc1-4b1b-91c5-31ac6e660d65\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.697364 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.697537 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-974kv" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.701245 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.701959 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.708250 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5vjr\" (UniqueName: \"kubernetes.io/projected/493d54b8-1e0a-4270-8180-ba1bc746c783-kube-api-access-t5vjr\") pod \"ironic-operator-controller-manager-67cb4dc6d4-n952x\" (UID: \"493d54b8-1e0a-4270-8180-ba1bc746c783\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.730718 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pzsr\" (UniqueName: \"kubernetes.io/projected/75996749-aa6c-4a8e-ba7f-412209db3939-kube-api-access-9pzsr\") pod \"manila-operator-controller-manager-5d499bf58b-9xxwc\" (UID: \"75996749-aa6c-4a8e-ba7f-412209db3939\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.730763 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grd5s\" (UniqueName: \"kubernetes.io/projected/b96763b6-e6a4-4429-8fe4-6b23620824c1-kube-api-access-grd5s\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-xqxsn\" (UID: \"b96763b6-e6a4-4429-8fe4-6b23620824c1\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.730793 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5p5d\" (UniqueName: \"kubernetes.io/projected/623cd065-a088-41d4-9b98-8be8d60c0f20-kube-api-access-v5p5d\") pod \"octavia-operator-controller-manager-64cdc6ff96-kvnt5\" (UID: \"623cd065-a088-41d4-9b98-8be8d60c0f20\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.730830 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2s64\" (UniqueName: \"kubernetes.io/projected/911b9690-ddec-439e-9ef5-a7d80562f51c-kube-api-access-h2s64\") pod \"neutron-operator-controller-manager-6fdcddb789-cnfmd\" 
(UID: \"911b9690-ddec-439e-9ef5-a7d80562f51c\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.730849 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frmt4\" (UniqueName: \"kubernetes.io/projected/3b4accd2-e9c1-4e51-a559-c5cf108f5af1-kube-api-access-frmt4\") pod \"nova-operator-controller-manager-79556f57fc-zzflc\" (UID: \"3b4accd2-e9c1-4e51-a559-c5cf108f5af1\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.730893 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmjl6\" (UniqueName: \"kubernetes.io/projected/bb4ac6b3-6655-4e29-8cf7-bdae98df3386-kube-api-access-jmjl6\") pod \"ovn-operator-controller-manager-56897c768d-v49kv\" (UID: \"bb4ac6b3-6655-4e29-8cf7-bdae98df3386\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.738017 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78lhw\" (UniqueName: \"kubernetes.io/projected/af7046d6-f852-4c62-83e6-ea213812d86c-kube-api-access-78lhw\") pod \"infra-operator-controller-manager-57548d458d-7pv5r\" (UID: \"af7046d6-f852-4c62-83e6-ea213812d86c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.758742 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntz5t\" (UniqueName: \"kubernetes.io/projected/40688ccc-932c-411e-8703-4bf0f11ec3bf-kube-api-access-ntz5t\") pod \"horizon-operator-controller-manager-5d494799bf-vd654\" (UID: \"40688ccc-932c-411e-8703-4bf0f11ec3bf\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.770454 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.779285 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.790711 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2s64\" (UniqueName: \"kubernetes.io/projected/911b9690-ddec-439e-9ef5-a7d80562f51c-kube-api-access-h2s64\") pod \"neutron-operator-controller-manager-6fdcddb789-cnfmd\" (UID: \"911b9690-ddec-439e-9ef5-a7d80562f51c\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.791933 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.798959 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.799825 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.800119 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.802713 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-f8kwp" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.802888 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fnxl9" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.802987 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.803033 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.803517 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frmt4\" (UniqueName: \"kubernetes.io/projected/3b4accd2-e9c1-4e51-a559-c5cf108f5af1-kube-api-access-frmt4\") pod \"nova-operator-controller-manager-79556f57fc-zzflc\" (UID: \"3b4accd2-e9c1-4e51-a559-c5cf108f5af1\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.803853 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pzsr\" (UniqueName: \"kubernetes.io/projected/75996749-aa6c-4a8e-ba7f-412209db3939-kube-api-access-9pzsr\") pod \"manila-operator-controller-manager-5d499bf58b-9xxwc\" (UID: \"75996749-aa6c-4a8e-ba7f-412209db3939\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.807927 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grd5s\" (UniqueName: \"kubernetes.io/projected/b96763b6-e6a4-4429-8fe4-6b23620824c1-kube-api-access-grd5s\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-xqxsn\" (UID: \"b96763b6-e6a4-4429-8fe4-6b23620824c1\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.808700 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5p5d\" (UniqueName: \"kubernetes.io/projected/623cd065-a088-41d4-9b98-8be8d60c0f20-kube-api-access-v5p5d\") pod \"octavia-operator-controller-manager-64cdc6ff96-kvnt5\" (UID: \"623cd065-a088-41d4-9b98-8be8d60c0f20\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.812201 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.834613 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmjl6\" (UniqueName: \"kubernetes.io/projected/bb4ac6b3-6655-4e29-8cf7-bdae98df3386-kube-api-access-jmjl6\") pod \"ovn-operator-controller-manager-56897c768d-v49kv\" (UID: \"bb4ac6b3-6655-4e29-8cf7-bdae98df3386\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.834686 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh\" (UID: \"66bfbaf1-3247-47c1-aa58-19cf5875882e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.834709 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f722q\" (UniqueName: \"kubernetes.io/projected/66bfbaf1-3247-47c1-aa58-19cf5875882e-kube-api-access-f722q\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh\" (UID: \"66bfbaf1-3247-47c1-aa58-19cf5875882e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.834744 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqnzp\" (UniqueName: \"kubernetes.io/projected/b1c19869-b98a-40c8-a312-8c49d69bdf0f-kube-api-access-tqnzp\") pod \"placement-operator-controller-manager-57988cc5b5-lnf86\" (UID: \"b1c19869-b98a-40c8-a312-8c49d69bdf0f\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.845119 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.857072 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.872445 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmjl6\" (UniqueName: \"kubernetes.io/projected/bb4ac6b3-6655-4e29-8cf7-bdae98df3386-kube-api-access-jmjl6\") pod \"ovn-operator-controller-manager-56897c768d-v49kv\" (UID: \"bb4ac6b3-6655-4e29-8cf7-bdae98df3386\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.873123 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.878112 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.893688 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.899318 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7cr7l" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.920735 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.935729 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh\" (UID: \"66bfbaf1-3247-47c1-aa58-19cf5875882e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.935773 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f722q\" (UniqueName: \"kubernetes.io/projected/66bfbaf1-3247-47c1-aa58-19cf5875882e-kube-api-access-f722q\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh\" (UID: \"66bfbaf1-3247-47c1-aa58-19cf5875882e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.935809 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqnzp\" (UniqueName: \"kubernetes.io/projected/b1c19869-b98a-40c8-a312-8c49d69bdf0f-kube-api-access-tqnzp\") pod \"placement-operator-controller-manager-57988cc5b5-lnf86\" (UID: \"b1c19869-b98a-40c8-a312-8c49d69bdf0f\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.935849 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x2fz\" (UniqueName: \"kubernetes.io/projected/f3d69218-2422-473c-ae41-bd2a2b902355-kube-api-access-8x2fz\") pod \"swift-operator-controller-manager-d77b94747-c6wb2\" (UID: \"f3d69218-2422-473c-ae41-bd2a2b902355\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2" Nov 28 12:51:35 crc kubenswrapper[4779]: E1128 12:51:35.935911 4779 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 12:51:35 crc kubenswrapper[4779]: E1128 12:51:35.935973 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert podName:66bfbaf1-3247-47c1-aa58-19cf5875882e nodeName:}" failed. No retries permitted until 2025-11-28 12:51:36.43595614 +0000 UTC m=+957.001631494 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" (UID: "66bfbaf1-3247-47c1-aa58-19cf5875882e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.943188 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.944368 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.949418 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-s44jw" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.954262 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f722q\" (UniqueName: \"kubernetes.io/projected/66bfbaf1-3247-47c1-aa58-19cf5875882e-kube-api-access-f722q\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh\" (UID: \"66bfbaf1-3247-47c1-aa58-19cf5875882e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.960233 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqnzp\" (UniqueName: \"kubernetes.io/projected/b1c19869-b98a-40c8-a312-8c49d69bdf0f-kube-api-access-tqnzp\") pod \"placement-operator-controller-manager-57988cc5b5-lnf86\" (UID: \"b1c19869-b98a-40c8-a312-8c49d69bdf0f\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86" Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.963446 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.994119 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz"] Nov 28 12:51:35 crc kubenswrapper[4779]: I1128 12:51:35.995283 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.003217 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-ndt69" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.004175 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.009492 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.038548 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x2fz\" (UniqueName: \"kubernetes.io/projected/f3d69218-2422-473c-ae41-bd2a2b902355-kube-api-access-8x2fz\") pod \"swift-operator-controller-manager-d77b94747-c6wb2\" (UID: \"f3d69218-2422-473c-ae41-bd2a2b902355\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.038677 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-899rs\" (UniqueName: \"kubernetes.io/projected/39fdca45-fa34-4d90-93a9-1123dff79930-kube-api-access-899rs\") pod \"test-operator-controller-manager-5cd6c7f4c8-h4czz\" (UID: \"39fdca45-fa34-4d90-93a9-1123dff79930\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.038725 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvlb9\" (UniqueName: \"kubernetes.io/projected/f1d9753d-b49d-4e32-b312-137314283984-kube-api-access-cvlb9\") pod \"telemetry-operator-controller-manager-7574d9569-x822f\" (UID: \"f1d9753d-b49d-4e32-b312-137314283984\") " pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.041913 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.045318 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.045693 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.060024 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x2fz\" (UniqueName: \"kubernetes.io/projected/f3d69218-2422-473c-ae41-bd2a2b902355-kube-api-access-8x2fz\") pod \"swift-operator-controller-manager-d77b94747-c6wb2\" (UID: \"f3d69218-2422-473c-ae41-bd2a2b902355\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.076840 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.082046 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.083156 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.084843 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-nbsnt" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.096866 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.120357 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.143397 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvlb9\" (UniqueName: \"kubernetes.io/projected/f1d9753d-b49d-4e32-b312-137314283984-kube-api-access-cvlb9\") pod \"telemetry-operator-controller-manager-7574d9569-x822f\" (UID: \"f1d9753d-b49d-4e32-b312-137314283984\") " pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.143461 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78l2d\" (UniqueName: \"kubernetes.io/projected/1799095f-becf-4b8e-bb0b-28c04a819e59-kube-api-access-78l2d\") pod \"watcher-operator-controller-manager-656dcb59d4-hjhz4\" (UID: \"1799095f-becf-4b8e-bb0b-28c04a819e59\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.143513 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert\") pod \"infra-operator-controller-manager-57548d458d-7pv5r\" (UID: \"af7046d6-f852-4c62-83e6-ea213812d86c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.143548 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-899rs\" (UniqueName: \"kubernetes.io/projected/39fdca45-fa34-4d90-93a9-1123dff79930-kube-api-access-899rs\") pod \"test-operator-controller-manager-5cd6c7f4c8-h4czz\" (UID: \"39fdca45-fa34-4d90-93a9-1123dff79930\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.143984 4779 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.144022 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert podName:af7046d6-f852-4c62-83e6-ea213812d86c nodeName:}" failed. No retries permitted until 2025-11-28 12:51:37.144010283 +0000 UTC m=+957.709685637 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert") pod "infra-operator-controller-manager-57548d458d-7pv5r" (UID: "af7046d6-f852-4c62-83e6-ea213812d86c") : secret "infra-operator-webhook-server-cert" not found Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.149297 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.151291 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.153145 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.153174 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.154205 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.155781 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-sbwgq" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.161346 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvlb9\" (UniqueName: \"kubernetes.io/projected/f1d9753d-b49d-4e32-b312-137314283984-kube-api-access-cvlb9\") pod \"telemetry-operator-controller-manager-7574d9569-x822f\" (UID: \"f1d9753d-b49d-4e32-b312-137314283984\") " pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.163187 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-899rs\" (UniqueName: \"kubernetes.io/projected/39fdca45-fa34-4d90-93a9-1123dff79930-kube-api-access-899rs\") pod \"test-operator-controller-manager-5cd6c7f4c8-h4czz\" (UID: \"39fdca45-fa34-4d90-93a9-1123dff79930\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.165131 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.166288 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.167574 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6z4l6" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.170242 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.228198 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.244967 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v24bv\" (UniqueName: \"kubernetes.io/projected/1c62c5f4-5757-46d4-92e5-7fdb2b21c88e-kube-api-access-v24bv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-495dt\" (UID: \"1c62c5f4-5757-46d4-92e5-7fdb2b21c88e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.245018 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.245041 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78l2d\" (UniqueName: \"kubernetes.io/projected/1799095f-becf-4b8e-bb0b-28c04a819e59-kube-api-access-78l2d\") pod \"watcher-operator-controller-manager-656dcb59d4-hjhz4\" (UID: \"1799095f-becf-4b8e-bb0b-28c04a819e59\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.245072 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.245141 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvcwv\" (UniqueName: \"kubernetes.io/projected/31627cc1-b543-4da9-8fe1-ac12e7f09531-kube-api-access-rvcwv\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.263382 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78l2d\" (UniqueName: \"kubernetes.io/projected/1799095f-becf-4b8e-bb0b-28c04a819e59-kube-api-access-78l2d\") pod \"watcher-operator-controller-manager-656dcb59d4-hjhz4\" (UID: \"1799095f-becf-4b8e-bb0b-28c04a819e59\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.296524 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.333402 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.340076 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.346485 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.346559 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvcwv\" (UniqueName: \"kubernetes.io/projected/31627cc1-b543-4da9-8fe1-ac12e7f09531-kube-api-access-rvcwv\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.346620 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v24bv\" (UniqueName: \"kubernetes.io/projected/1c62c5f4-5757-46d4-92e5-7fdb2b21c88e-kube-api-access-v24bv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-495dt\" (UID: \"1c62c5f4-5757-46d4-92e5-7fdb2b21c88e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.346642 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.346776 4779 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.346816 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs podName:31627cc1-b543-4da9-8fe1-ac12e7f09531 nodeName:}" failed. No retries permitted until 2025-11-28 12:51:36.846803264 +0000 UTC m=+957.412478618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs") pod "openstack-operator-controller-manager-7d967756df-nvprs" (UID: "31627cc1-b543-4da9-8fe1-ac12e7f09531") : secret "metrics-server-cert" not found Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.347088 4779 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.347125 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs podName:31627cc1-b543-4da9-8fe1-ac12e7f09531 nodeName:}" failed. No retries permitted until 2025-11-28 12:51:36.847118252 +0000 UTC m=+957.412793606 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs") pod "openstack-operator-controller-manager-7d967756df-nvprs" (UID: "31627cc1-b543-4da9-8fe1-ac12e7f09531") : secret "webhook-server-cert" not found Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.364160 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v24bv\" (UniqueName: \"kubernetes.io/projected/1c62c5f4-5757-46d4-92e5-7fdb2b21c88e-kube-api-access-v24bv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-495dt\" (UID: \"1c62c5f4-5757-46d4-92e5-7fdb2b21c88e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.365003 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvcwv\" (UniqueName: \"kubernetes.io/projected/31627cc1-b543-4da9-8fe1-ac12e7f09531-kube-api-access-rvcwv\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:51:36 crc kubenswrapper[4779]: W1128 12:51:36.376330 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7e646e3_00c9_4359_b012_aaff60962a76.slice/crio-64c4e80404e85121f02183d533a97b9186948cb2b3a881cb1fd75d5c0e5bed59 WatchSource:0}: Error finding container 64c4e80404e85121f02183d533a97b9186948cb2b3a881cb1fd75d5c0e5bed59: Status 404 returned error can't find the container with id 64c4e80404e85121f02183d533a97b9186948cb2b3a881cb1fd75d5c0e5bed59 Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.448112 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh\" (UID: \"66bfbaf1-3247-47c1-aa58-19cf5875882e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.448228 4779 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.448306 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert podName:66bfbaf1-3247-47c1-aa58-19cf5875882e nodeName:}" failed. No retries permitted until 2025-11-28 12:51:37.448288707 +0000 UTC m=+958.013964061 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" (UID: "66bfbaf1-3247-47c1-aa58-19cf5875882e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.481122 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.524464 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.527880 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.582367 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-rh5q9"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.589820 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.598066 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.608731 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c"] Nov 28 12:51:36 crc kubenswrapper[4779]: W1128 12:51:36.629734 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod493d54b8_1e0a_4270_8180_ba1bc746c783.slice/crio-36238e1268638266f031abc515cf46602d2b4ad43a94a273ab6936bef1cbdcca WatchSource:0}: Error finding container 36238e1268638266f031abc515cf46602d2b4ad43a94a273ab6936bef1cbdcca: Status 404 returned error can't find the container with id 36238e1268638266f031abc515cf46602d2b4ad43a94a273ab6936bef1cbdcca Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.787178 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.798670 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.820538 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.854415 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.854534 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.854587 4779 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.854647 4779 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.854683 4779 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs podName:31627cc1-b543-4da9-8fe1-ac12e7f09531 nodeName:}" failed. No retries permitted until 2025-11-28 12:51:37.854660271 +0000 UTC m=+958.420335635 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs") pod "openstack-operator-controller-manager-7d967756df-nvprs" (UID: "31627cc1-b543-4da9-8fe1-ac12e7f09531") : secret "webhook-server-cert" not found Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.854703 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs podName:31627cc1-b543-4da9-8fe1-ac12e7f09531 nodeName:}" failed. No retries permitted until 2025-11-28 12:51:37.854695112 +0000 UTC m=+958.420370476 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs") pod "openstack-operator-controller-manager-7d967756df-nvprs" (UID: "31627cc1-b543-4da9-8fe1-ac12e7f09531") : secret "metrics-server-cert" not found Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.900991 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.917227 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654"] Nov 28 12:51:36 crc kubenswrapper[4779]: W1128 12:51:36.921821 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40688ccc_932c_411e_8703_4bf0f11ec3bf.slice/crio-cb3cdaa18efc2434ba7e63010b10846d146e5236f850b5135fde6a8d56fec73c WatchSource:0}: Error finding container cb3cdaa18efc2434ba7e63010b10846d146e5236f850b5135fde6a8d56fec73c: Status 404 returned error can't find the container with id cb3cdaa18efc2434ba7e63010b10846d146e5236f850b5135fde6a8d56fec73c Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.925569 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86"] Nov 28 12:51:36 crc kubenswrapper[4779]: W1128 12:51:36.929763 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb4ac6b3_6655_4e29_8cf7_bdae98df3386.slice/crio-f16551fb806cc6ec5fef70bd4764209b139ed902dd8aba81c735775269d6e9e8 WatchSource:0}: Error finding container f16551fb806cc6ec5fef70bd4764209b139ed902dd8aba81c735775269d6e9e8: Status 404 returned error can't find the container with id f16551fb806cc6ec5fef70bd4764209b139ed902dd8aba81c735775269d6e9e8 Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.930839 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv"] Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.932406 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jmjl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-56897c768d-v49kv_openstack-operators(bb4ac6b3-6655-4e29-8cf7-bdae98df3386): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.934226 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jmjl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-56897c768d-v49kv_openstack-operators(bb4ac6b3-6655-4e29-8cf7-bdae98df3386): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.935060 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-zzflc_openstack-operators(3b4accd2-e9c1-4e51-a559-c5cf108f5af1): ErrImagePull: pull 
QPS exceeded" logger="UnhandledError" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.935320 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" podUID="bb4ac6b3-6655-4e29-8cf7-bdae98df3386" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.936841 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-zzflc_openstack-operators(3b4accd2-e9c1-4e51-a559-c5cf108f5af1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.936895 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5p5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-64cdc6ff96-kvnt5_openstack-operators(623cd065-a088-41d4-9b98-8be8d60c0f20): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 12:51:36 crc kubenswrapper[4779]: W1128 12:51:36.937770 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod911b9690_ddec_439e_9ef5_a7d80562f51c.slice/crio-50044ff438b75e998f658af5704818706fc5ba1ec209524f96a4c35997828638 WatchSource:0}: Error finding container 50044ff438b75e998f658af5704818706fc5ba1ec209524f96a4c35997828638: Status 404 returned error can't find the container with id 50044ff438b75e998f658af5704818706fc5ba1ec209524f96a4c35997828638 Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.938448 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" podUID="3b4accd2-e9c1-4e51-a559-c5cf108f5af1" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.939000 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5p5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-64cdc6ff96-kvnt5_openstack-operators(623cd065-a088-41d4-9b98-8be8d60c0f20): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.940012 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5"] Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.940064 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" podUID="623cd065-a088-41d4-9b98-8be8d60c0f20" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.941615 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e00a9ed0ab26c5b745bd804ab1fe6b22428d026f17ea05a05f045e060342f46c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2s64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6fdcddb789-cnfmd_openstack-operators(911b9690-ddec-439e-9ef5-a7d80562f51c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.943989 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2s64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6fdcddb789-cnfmd_openstack-operators(911b9690-ddec-439e-9ef5-a7d80562f51c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 12:51:36 crc kubenswrapper[4779]: E1128 12:51:36.945660 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" podUID="911b9690-ddec-439e-9ef5-a7d80562f51c" Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.956547 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc"] Nov 28 12:51:36 crc kubenswrapper[4779]: I1128 12:51:36.969964 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd"] Nov 28 12:51:37 crc 
kubenswrapper[4779]: I1128 12:51:37.076257 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz"] Nov 28 12:51:37 crc kubenswrapper[4779]: W1128 12:51:37.082494 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39fdca45_fa34_4d90_93a9_1123dff79930.slice/crio-1992b1a48999ad3534090774aca6c440e44ca85e7855650cf1c3cdedbacdd0e3 WatchSource:0}: Error finding container 1992b1a48999ad3534090774aca6c440e44ca85e7855650cf1c3cdedbacdd0e3: Status 404 returned error can't find the container with id 1992b1a48999ad3534090774aca6c440e44ca85e7855650cf1c3cdedbacdd0e3 Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.109309 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f"] Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.117268 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt"] Nov 28 12:51:37 crc kubenswrapper[4779]: W1128 12:51:37.119057 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1d9753d_b49d_4e32_b312_137314283984.slice/crio-debdf423130b7f8fe64083e397ffd0c2aad7fc45c551d43b57a586a841c91485 WatchSource:0}: Error finding container debdf423130b7f8fe64083e397ffd0c2aad7fc45c551d43b57a586a841c91485: Status 404 returned error can't find the container with id debdf423130b7f8fe64083e397ffd0c2aad7fc45c551d43b57a586a841c91485 Nov 28 12:51:37 crc kubenswrapper[4779]: W1128 12:51:37.120154 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c62c5f4_5757_46d4_92e5_7fdb2b21c88e.slice/crio-f9f3f29a6835b5bff0bde744377de7de41a7da04fbf27c55dc7a325a9bc978d0 WatchSource:0}: Error finding container f9f3f29a6835b5bff0bde744377de7de41a7da04fbf27c55dc7a325a9bc978d0: Status 404 returned error can't find the container with id f9f3f29a6835b5bff0bde744377de7de41a7da04fbf27c55dc7a325a9bc978d0 Nov 28 12:51:37 crc kubenswrapper[4779]: W1128 12:51:37.122289 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1799095f_becf_4b8e_bb0b_28c04a819e59.slice/crio-4b9e7c0a6b1d2ead61b3993df395eb7689082e36b6bdceb54e18499d88dddea7 WatchSource:0}: Error finding container 4b9e7c0a6b1d2ead61b3993df395eb7689082e36b6bdceb54e18499d88dddea7: Status 404 returned error can't find the container with id 4b9e7c0a6b1d2ead61b3993df395eb7689082e36b6bdceb54e18499d88dddea7 Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.122621 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v24bv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-495dt_openstack-operators(1c62c5f4-5757-46d4-92e5-7fdb2b21c88e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.122788 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.50:5001/openstack-k8s-operators/telemetry-operator:a56cff847472bbc2ff74c1f159f60d5390d3c1bf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cvlb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7574d9569-x822f_openstack-operators(f1d9753d-b49d-4e32-b312-137314283984): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.124225 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt" podUID="1c62c5f4-5757-46d4-92e5-7fdb2b21c88e" Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.124288 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-78l2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-hjhz4_openstack-operators(1799095f-becf-4b8e-bb0b-28c04a819e59): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.124972 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cvlb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7574d9569-x822f_openstack-operators(f1d9753d-b49d-4e32-b312-137314283984): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.126284 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" podUID="f1d9753d-b49d-4e32-b312-137314283984" Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.129056 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-78l2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-hjhz4_openstack-operators(1799095f-becf-4b8e-bb0b-28c04a819e59): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.130189 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" podUID="1799095f-becf-4b8e-bb0b-28c04a819e59" Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.131415 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4"] Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.159646 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert\") pod \"infra-operator-controller-manager-57548d458d-7pv5r\" (UID: \"af7046d6-f852-4c62-83e6-ea213812d86c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.159839 4779 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.159914 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert podName:af7046d6-f852-4c62-83e6-ea213812d86c nodeName:}" failed. No retries permitted until 2025-11-28 12:51:39.159893491 +0000 UTC m=+959.725568855 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert") pod "infra-operator-controller-manager-57548d458d-7pv5r" (UID: "af7046d6-f852-4c62-83e6-ea213812d86c") : secret "infra-operator-webhook-server-cert" not found Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.316026 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c" event={"ID":"eaf24224-e1f5-44d8-8151-54be9408b429","Type":"ContainerStarted","Data":"732cbd74070a6fd8f591cdee3ff00c49881f58685b87e84f99315890f62a4056"} Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.318409 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" event={"ID":"1799095f-becf-4b8e-bb0b-28c04a819e59","Type":"ContainerStarted","Data":"4b9e7c0a6b1d2ead61b3993df395eb7689082e36b6bdceb54e18499d88dddea7"} Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.320595 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj" event={"ID":"854f928b-5068-4de9-b865-7fb2a26ca9e4","Type":"ContainerStarted","Data":"dbb3cad3bf0f9f8c50dbc80bd08321f9a3455578809c962209c04b0ca189cfad"} Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.321328 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" podUID="1799095f-becf-4b8e-bb0b-28c04a819e59" Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.323456 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" event={"ID":"bb4ac6b3-6655-4e29-8cf7-bdae98df3386","Type":"ContainerStarted","Data":"f16551fb806cc6ec5fef70bd4764209b139ed902dd8aba81c735775269d6e9e8"} Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.334588 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" podUID="bb4ac6b3-6655-4e29-8cf7-bdae98df3386" Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.335538 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn" event={"ID":"b96763b6-e6a4-4429-8fe4-6b23620824c1","Type":"ContainerStarted","Data":"29fe15e242579f30083df2df70a83a3e91fea4d57e3889c6c21a96afbe11313e"} Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.337472 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc" 
event={"ID":"75996749-aa6c-4a8e-ba7f-412209db3939","Type":"ContainerStarted","Data":"6f81f89fa1fc5a15d7c00487fc74273c27ee42157c6b18340d45381f6b8fb1c4"} Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.340009 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" event={"ID":"3b4accd2-e9c1-4e51-a559-c5cf108f5af1","Type":"ContainerStarted","Data":"3595af280de467cbca7c3b91a0c7311f66869de6263e4bff6308df1f024fb54a"} Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.341839 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45" event={"ID":"da8e3e32-3cc1-4b1b-91c5-31ac6e660d65","Type":"ContainerStarted","Data":"3e3f5b0157445b76e13216f0fbed2e8a1152c51a0a9091ce6f122d78c1d8fbf8"} Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.343327 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" podUID="3b4accd2-e9c1-4e51-a559-c5cf108f5af1" Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.344022 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g" event={"ID":"e7e646e3-00c9-4359-b012-aaff60962a76","Type":"ContainerStarted","Data":"64c4e80404e85121f02183d533a97b9186948cb2b3a881cb1fd75d5c0e5bed59"} Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.345299 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" event={"ID":"911b9690-ddec-439e-9ef5-a7d80562f51c","Type":"ContainerStarted","Data":"50044ff438b75e998f658af5704818706fc5ba1ec209524f96a4c35997828638"} Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.348215 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86" event={"ID":"b1c19869-b98a-40c8-a312-8c49d69bdf0f","Type":"ContainerStarted","Data":"6fe67184cbd0003057455e70d83bc06bc9165154f0f80253837c19c489af850e"} Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.349214 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e00a9ed0ab26c5b745bd804ab1fe6b22428d026f17ea05a05f045e060342f46c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" podUID="911b9690-ddec-439e-9ef5-a7d80562f51c" Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.355460 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2" event={"ID":"f3d69218-2422-473c-ae41-bd2a2b902355","Type":"ContainerStarted","Data":"ae5b950e944fb2005f758da4d1c39c1e2cb09b42376d986b0e498ffb1f0626eb"} Nov 28 12:51:37 crc 
kubenswrapper[4779]: I1128 12:51:37.359065 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" event={"ID":"f1d9753d-b49d-4e32-b312-137314283984","Type":"ContainerStarted","Data":"debdf423130b7f8fe64083e397ffd0c2aad7fc45c551d43b57a586a841c91485"} Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.364865 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/openstack-k8s-operators/telemetry-operator:a56cff847472bbc2ff74c1f159f60d5390d3c1bf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" podUID="f1d9753d-b49d-4e32-b312-137314283984" Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.364870 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt" event={"ID":"1c62c5f4-5757-46d4-92e5-7fdb2b21c88e","Type":"ContainerStarted","Data":"f9f3f29a6835b5bff0bde744377de7de41a7da04fbf27c55dc7a325a9bc978d0"} Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.369677 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt" podUID="1c62c5f4-5757-46d4-92e5-7fdb2b21c88e" Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.373232 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" event={"ID":"623cd065-a088-41d4-9b98-8be8d60c0f20","Type":"ContainerStarted","Data":"d5d987c5fac10e1f59171eda507e861bcbfe88ed8ed2128e5a048e9d4ded286e"} Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.375175 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" podUID="623cd065-a088-41d4-9b98-8be8d60c0f20" Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.375498 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654" event={"ID":"40688ccc-932c-411e-8703-4bf0f11ec3bf","Type":"ContainerStarted","Data":"cb3cdaa18efc2434ba7e63010b10846d146e5236f850b5135fde6a8d56fec73c"} Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.376583 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7" event={"ID":"b3e0c6a3-33d8-4c1e-8b44-156de87d5621","Type":"ContainerStarted","Data":"977af554f08ee5d1a15c3b97ddf8e6bdd1d1d6ea1321c3a206139f460639555c"} Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.377995 4779 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x" event={"ID":"493d54b8-1e0a-4270-8180-ba1bc746c783","Type":"ContainerStarted","Data":"36238e1268638266f031abc515cf46602d2b4ad43a94a273ab6936bef1cbdcca"} Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.392316 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-rh5q9" event={"ID":"8d20efbb-527c-4085-a974-d49ee454b545","Type":"ContainerStarted","Data":"d98fdf9f458c8b323f86d670fb097ae25f784d96df822d159e91cf2b3ee276b1"} Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.393939 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz" event={"ID":"39fdca45-fa34-4d90-93a9-1123dff79930","Type":"ContainerStarted","Data":"1992b1a48999ad3534090774aca6c440e44ca85e7855650cf1c3cdedbacdd0e3"} Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.465646 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh\" (UID: \"66bfbaf1-3247-47c1-aa58-19cf5875882e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.466663 4779 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.466712 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert podName:66bfbaf1-3247-47c1-aa58-19cf5875882e nodeName:}" failed. No retries permitted until 2025-11-28 12:51:39.466696823 +0000 UTC m=+960.032372187 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" (UID: "66bfbaf1-3247-47c1-aa58-19cf5875882e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.872361 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.872545 4779 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.872767 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs podName:31627cc1-b543-4da9-8fe1-ac12e7f09531 nodeName:}" failed. No retries permitted until 2025-11-28 12:51:39.872747909 +0000 UTC m=+960.438423263 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs") pod "openstack-operator-controller-manager-7d967756df-nvprs" (UID: "31627cc1-b543-4da9-8fe1-ac12e7f09531") : secret "webhook-server-cert" not found Nov 28 12:51:37 crc kubenswrapper[4779]: I1128 12:51:37.873222 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.873365 4779 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 12:51:37 crc kubenswrapper[4779]: E1128 12:51:37.873395 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs podName:31627cc1-b543-4da9-8fe1-ac12e7f09531 nodeName:}" failed. No retries permitted until 2025-11-28 12:51:39.873387886 +0000 UTC m=+960.439063230 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs") pod "openstack-operator-controller-manager-7d967756df-nvprs" (UID: "31627cc1-b543-4da9-8fe1-ac12e7f09531") : secret "metrics-server-cert" not found Nov 28 12:51:38 crc kubenswrapper[4779]: E1128 12:51:38.414717 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt" podUID="1c62c5f4-5757-46d4-92e5-7fdb2b21c88e" Nov 28 12:51:38 crc kubenswrapper[4779]: E1128 12:51:38.415542 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e00a9ed0ab26c5b745bd804ab1fe6b22428d026f17ea05a05f045e060342f46c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" podUID="911b9690-ddec-439e-9ef5-a7d80562f51c" Nov 28 12:51:38 crc kubenswrapper[4779]: E1128 12:51:38.415790 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" podUID="1799095f-becf-4b8e-bb0b-28c04a819e59" Nov 28 12:51:38 crc kubenswrapper[4779]: E1128 12:51:38.416351 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" podUID="623cd065-a088-41d4-9b98-8be8d60c0f20"
Nov 28 12:51:38 crc kubenswrapper[4779]: E1128 12:51:38.416564 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/openstack-k8s-operators/telemetry-operator:a56cff847472bbc2ff74c1f159f60d5390d3c1bf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" podUID="f1d9753d-b49d-4e32-b312-137314283984"
Nov 28 12:51:38 crc kubenswrapper[4779]: E1128 12:51:38.419850 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" podUID="bb4ac6b3-6655-4e29-8cf7-bdae98df3386"
Nov 28 12:51:38 crc kubenswrapper[4779]: E1128 12:51:38.419935 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" podUID="3b4accd2-e9c1-4e51-a559-c5cf108f5af1"
Nov 28 12:51:38 crc kubenswrapper[4779]: I1128 12:51:38.809943 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fr8cd"
Nov 28 12:51:38 crc kubenswrapper[4779]: I1128 12:51:38.810286 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fr8cd"
Nov 28 12:51:38 crc kubenswrapper[4779]: I1128 12:51:38.909982 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fr8cd"
Nov 28 12:51:39 crc kubenswrapper[4779]: I1128 12:51:39.197764 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert\") pod \"infra-operator-controller-manager-57548d458d-7pv5r\" (UID: \"af7046d6-f852-4c62-83e6-ea213812d86c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r"
Nov 28 12:51:39 crc kubenswrapper[4779]: E1128 12:51:39.197963 4779 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 28 12:51:39 crc kubenswrapper[4779]: E1128 12:51:39.198009 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert podName:af7046d6-f852-4c62-83e6-ea213812d86c nodeName:}" failed. No retries permitted until 2025-11-28 12:51:43.197996097 +0000 UTC m=+963.763671441 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert") pod "infra-operator-controller-manager-57548d458d-7pv5r" (UID: "af7046d6-f852-4c62-83e6-ea213812d86c") : secret "infra-operator-webhook-server-cert" not found
Nov 28 12:51:39 crc kubenswrapper[4779]: I1128 12:51:39.464532 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fr8cd"
Nov 28 12:51:39 crc kubenswrapper[4779]: I1128 12:51:39.503455 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fr8cd"]
Nov 28 12:51:39 crc kubenswrapper[4779]: I1128 12:51:39.504107 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh\" (UID: \"66bfbaf1-3247-47c1-aa58-19cf5875882e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh"
Nov 28 12:51:39 crc kubenswrapper[4779]: E1128 12:51:39.504265 4779 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 12:51:39 crc kubenswrapper[4779]: E1128 12:51:39.504340 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert podName:66bfbaf1-3247-47c1-aa58-19cf5875882e nodeName:}" failed. No retries permitted until 2025-11-28 12:51:43.504322396 +0000 UTC m=+964.069997750 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" (UID: "66bfbaf1-3247-47c1-aa58-19cf5875882e") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 12:51:39 crc kubenswrapper[4779]: I1128 12:51:39.910656 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs"
Nov 28 12:51:39 crc kubenswrapper[4779]: I1128 12:51:39.911198 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs"
Nov 28 12:51:39 crc kubenswrapper[4779]: E1128 12:51:39.911532 4779 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 28 12:51:39 crc kubenswrapper[4779]: E1128 12:51:39.911615 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs podName:31627cc1-b543-4da9-8fe1-ac12e7f09531 nodeName:}" failed. No retries permitted until 2025-11-28 12:51:43.911587284 +0000 UTC m=+964.477262678 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs") pod "openstack-operator-controller-manager-7d967756df-nvprs" (UID: "31627cc1-b543-4da9-8fe1-ac12e7f09531") : secret "webhook-server-cert" not found
Nov 28 12:51:39 crc kubenswrapper[4779]: E1128 12:51:39.912240 4779 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 28 12:51:39 crc kubenswrapper[4779]: E1128 12:51:39.912306 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs podName:31627cc1-b543-4da9-8fe1-ac12e7f09531 nodeName:}" failed. No retries permitted until 2025-11-28 12:51:43.912289833 +0000 UTC m=+964.477965227 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs") pod "openstack-operator-controller-manager-7d967756df-nvprs" (UID: "31627cc1-b543-4da9-8fe1-ac12e7f09531") : secret "metrics-server-cert" not found
Nov 28 12:51:41 crc kubenswrapper[4779]: I1128 12:51:41.434754 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fr8cd" podUID="ba53250e-91a2-45bf-a609-ebed70fce751" containerName="registry-server" containerID="cri-o://9b9d4b328133a718706c072881ed99a196d7fd581aae49c971cb44324dddd166" gracePeriod=2
Nov 28 12:51:43 crc kubenswrapper[4779]: I1128 12:51:43.264279 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert\") pod \"infra-operator-controller-manager-57548d458d-7pv5r\" (UID: \"af7046d6-f852-4c62-83e6-ea213812d86c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r"
Nov 28 12:51:43 crc kubenswrapper[4779]: E1128 12:51:43.264833 4779 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 28 12:51:43 crc kubenswrapper[4779]: E1128 12:51:43.264886 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert podName:af7046d6-f852-4c62-83e6-ea213812d86c nodeName:}" failed. No retries permitted until 2025-11-28 12:51:51.264869389 +0000 UTC m=+971.830544743 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert") pod "infra-operator-controller-manager-57548d458d-7pv5r" (UID: "af7046d6-f852-4c62-83e6-ea213812d86c") : secret "infra-operator-webhook-server-cert" not found
Nov 28 12:51:43 crc kubenswrapper[4779]: I1128 12:51:43.474215 4779 generic.go:334] "Generic (PLEG): container finished" podID="ba53250e-91a2-45bf-a609-ebed70fce751" containerID="9b9d4b328133a718706c072881ed99a196d7fd581aae49c971cb44324dddd166" exitCode=0
Nov 28 12:51:43 crc kubenswrapper[4779]: I1128 12:51:43.474314 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr8cd" event={"ID":"ba53250e-91a2-45bf-a609-ebed70fce751","Type":"ContainerDied","Data":"9b9d4b328133a718706c072881ed99a196d7fd581aae49c971cb44324dddd166"}
Nov 28 12:51:43 crc kubenswrapper[4779]: I1128 12:51:43.569030 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh\" (UID: \"66bfbaf1-3247-47c1-aa58-19cf5875882e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh"
Nov 28 12:51:43 crc kubenswrapper[4779]: E1128 12:51:43.569298 4779 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 12:51:43 crc kubenswrapper[4779]: E1128 12:51:43.569417 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert podName:66bfbaf1-3247-47c1-aa58-19cf5875882e nodeName:}" failed. No retries permitted until 2025-11-28 12:51:51.569376399 +0000 UTC m=+972.135051763 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" (UID: "66bfbaf1-3247-47c1-aa58-19cf5875882e") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 12:51:43 crc kubenswrapper[4779]: I1128 12:51:43.976209 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs"
Nov 28 12:51:43 crc kubenswrapper[4779]: I1128 12:51:43.976374 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs"
Nov 28 12:51:43 crc kubenswrapper[4779]: E1128 12:51:43.976369 4779 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 28 12:51:43 crc kubenswrapper[4779]: E1128 12:51:43.976415 4779 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 28 12:51:43 crc kubenswrapper[4779]: E1128 12:51:43.976530 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs podName:31627cc1-b543-4da9-8fe1-ac12e7f09531 nodeName:}" failed. No retries permitted until 2025-11-28 12:51:51.976510304 +0000 UTC m=+972.542185678 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs") pod "openstack-operator-controller-manager-7d967756df-nvprs" (UID: "31627cc1-b543-4da9-8fe1-ac12e7f09531") : secret "webhook-server-cert" not found
Nov 28 12:51:43 crc kubenswrapper[4779]: E1128 12:51:43.976588 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs podName:31627cc1-b543-4da9-8fe1-ac12e7f09531 nodeName:}" failed. No retries permitted until 2025-11-28 12:51:51.976554085 +0000 UTC m=+972.542229459 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs") pod "openstack-operator-controller-manager-7d967756df-nvprs" (UID: "31627cc1-b543-4da9-8fe1-ac12e7f09531") : secret "metrics-server-cert" not found
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.239322 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lqczf"]
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.241238 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqczf"
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.296156 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aa273c-f970-4e1f-9484-14575338368f-catalog-content\") pod \"redhat-marketplace-lqczf\" (UID: \"99aa273c-f970-4e1f-9484-14575338368f\") " pod="openshift-marketplace/redhat-marketplace-lqczf"
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.296426 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aa273c-f970-4e1f-9484-14575338368f-utilities\") pod \"redhat-marketplace-lqczf\" (UID: \"99aa273c-f970-4e1f-9484-14575338368f\") " pod="openshift-marketplace/redhat-marketplace-lqczf"
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.296502 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfvs4\" (UniqueName: \"kubernetes.io/projected/99aa273c-f970-4e1f-9484-14575338368f-kube-api-access-tfvs4\") pod \"redhat-marketplace-lqczf\" (UID: \"99aa273c-f970-4e1f-9484-14575338368f\") " pod="openshift-marketplace/redhat-marketplace-lqczf"
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.302991 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqczf"]
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.397562 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aa273c-f970-4e1f-9484-14575338368f-catalog-content\") pod \"redhat-marketplace-lqczf\" (UID: \"99aa273c-f970-4e1f-9484-14575338368f\") " pod="openshift-marketplace/redhat-marketplace-lqczf"
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.397664 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aa273c-f970-4e1f-9484-14575338368f-utilities\") pod \"redhat-marketplace-lqczf\" (UID: \"99aa273c-f970-4e1f-9484-14575338368f\") " pod="openshift-marketplace/redhat-marketplace-lqczf"
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.397700 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfvs4\" (UniqueName: \"kubernetes.io/projected/99aa273c-f970-4e1f-9484-14575338368f-kube-api-access-tfvs4\") pod \"redhat-marketplace-lqczf\" (UID: \"99aa273c-f970-4e1f-9484-14575338368f\") " pod="openshift-marketplace/redhat-marketplace-lqczf"
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.398252 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aa273c-f970-4e1f-9484-14575338368f-utilities\") pod \"redhat-marketplace-lqczf\" (UID: \"99aa273c-f970-4e1f-9484-14575338368f\") " pod="openshift-marketplace/redhat-marketplace-lqczf"
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.398616 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aa273c-f970-4e1f-9484-14575338368f-catalog-content\") pod \"redhat-marketplace-lqczf\" (UID: \"99aa273c-f970-4e1f-9484-14575338368f\") " pod="openshift-marketplace/redhat-marketplace-lqczf"
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.439809 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfvs4\" (UniqueName: \"kubernetes.io/projected/99aa273c-f970-4e1f-9484-14575338368f-kube-api-access-tfvs4\") pod \"redhat-marketplace-lqczf\" (UID: \"99aa273c-f970-4e1f-9484-14575338368f\") " pod="openshift-marketplace/redhat-marketplace-lqczf"
Nov 28 12:51:45 crc kubenswrapper[4779]: I1128 12:51:45.564937 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqczf"
Nov 28 12:51:46 crc kubenswrapper[4779]: I1128 12:51:46.285141 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 12:51:46 crc kubenswrapper[4779]: I1128 12:51:46.285498 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 12:51:48 crc kubenswrapper[4779]: E1128 12:51:48.810931 4779 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b9d4b328133a718706c072881ed99a196d7fd581aae49c971cb44324dddd166 is running failed: container process not found" containerID="9b9d4b328133a718706c072881ed99a196d7fd581aae49c971cb44324dddd166" cmd=["grpc_health_probe","-addr=:50051"]
Nov 28 12:51:48 crc kubenswrapper[4779]: E1128 12:51:48.812381 4779 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b9d4b328133a718706c072881ed99a196d7fd581aae49c971cb44324dddd166 is running failed: container process not found" containerID="9b9d4b328133a718706c072881ed99a196d7fd581aae49c971cb44324dddd166" cmd=["grpc_health_probe","-addr=:50051"]
Nov 28 12:51:48 crc kubenswrapper[4779]: E1128 12:51:48.812936 4779 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b9d4b328133a718706c072881ed99a196d7fd581aae49c971cb44324dddd166 is running failed: container process not found" containerID="9b9d4b328133a718706c072881ed99a196d7fd581aae49c971cb44324dddd166" cmd=["grpc_health_probe","-addr=:50051"]
Nov 28 12:51:48 crc kubenswrapper[4779]: E1128 12:51:48.813000 4779 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b9d4b328133a718706c072881ed99a196d7fd581aae49c971cb44324dddd166 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-fr8cd" podUID="ba53250e-91a2-45bf-a609-ebed70fce751" containerName="registry-server"
Nov 28 12:51:49 crc kubenswrapper[4779]: E1128 12:51:49.015111 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa"
Nov 28 12:51:49 crc kubenswrapper[4779]: E1128 12:51:49.015313 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-899rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-h4czz_openstack-operators(39fdca45-fa34-4d90-93a9-1123dff79930): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 12:51:49 crc kubenswrapper[4779]: E1128 12:51:49.707593 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:45ae665ce2ea81aef212ee402cb02693ee49001a7c88c40c9598ff2859b838a2"
Nov 28 12:51:49 crc kubenswrapper[4779]: E1128 12:51:49.707831 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:45ae665ce2ea81aef212ee402cb02693ee49001a7c88c40c9598ff2859b838a2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j9g6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-589cbd6b5b-ns58c_openstack-operators(eaf24224-e1f5-44d8-8151-54be9408b429): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 12:51:50 crc kubenswrapper[4779]: E1128 12:51:50.245078 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:ec4e5c911c1d0f1ea211a04b251a9d2e95b69d141c1caf07a0381693b2d6368b"
Nov 28 12:51:50 crc kubenswrapper[4779]: E1128 12:51:50.245337 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:ec4e5c911c1d0f1ea211a04b251a9d2e95b69d141c1caf07a0381693b2d6368b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2bqxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-955677c94-rh5q9_openstack-operators(8d20efbb-527c-4085-a974-d49ee454b545): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 12:51:50 crc kubenswrapper[4779]: E1128 12:51:50.842617 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:d65dbfc956e9cf376f3c48fc3a0942cb7306b5164f898c40d1efca106df81db7"
Nov 28 12:51:50 crc kubenswrapper[4779]: E1128 12:51:50.842785 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:d65dbfc956e9cf376f3c48fc3a0942cb7306b5164f898c40d1efca106df81db7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t5vjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-67cb4dc6d4-n952x_openstack-operators(493d54b8-1e0a-4270-8180-ba1bc746c783): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 12:51:51 crc kubenswrapper[4779]: I1128 12:51:51.307032 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert\") pod \"infra-operator-controller-manager-57548d458d-7pv5r\" (UID: \"af7046d6-f852-4c62-83e6-ea213812d86c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r"
Nov 28 12:51:51 crc kubenswrapper[4779]: I1128 12:51:51.322785 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af7046d6-f852-4c62-83e6-ea213812d86c-cert\") pod \"infra-operator-controller-manager-57548d458d-7pv5r\" (UID: \"af7046d6-f852-4c62-83e6-ea213812d86c\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r"
Nov 28 12:51:51 crc kubenswrapper[4779]: I1128 12:51:51.365482 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zwqkf"
Nov 28 12:51:51 crc kubenswrapper[4779]: I1128 12:51:51.373207 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r"
Nov 28 12:51:51 crc kubenswrapper[4779]: E1128 12:51:51.402430 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:89910bc3ecceb7590d3207ac294eb7354de358cf39ef03c72323b26c598e50e6"
Nov 28 12:51:51 crc kubenswrapper[4779]: E1128 12:51:51.402663 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:89910bc3ecceb7590d3207ac294eb7354de358cf39ef03c72323b26c598e50e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9pzsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5d499bf58b-9xxwc_openstack-operators(75996749-aa6c-4a8e-ba7f-412209db3939): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 12:51:51 crc kubenswrapper[4779]: I1128 12:51:51.611874 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh\" (UID: \"66bfbaf1-3247-47c1-aa58-19cf5875882e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh"
Nov 28 12:51:51 crc kubenswrapper[4779]: I1128 12:51:51.617336 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66bfbaf1-3247-47c1-aa58-19cf5875882e-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh\" (UID: \"66bfbaf1-3247-47c1-aa58-19cf5875882e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh"
Nov 28 12:51:51 crc kubenswrapper[4779]: I1128 12:51:51.731366 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-f8kwp"
Nov 28 12:51:51 crc kubenswrapper[4779]: I1128 12:51:51.739652 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh"
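[The two "MountVolume.SetUp succeeded" entries at 12:51:51 mark the moment the infra-operator-webhook-server-cert and openstack-baremetal-operator-webhook-server-cert secrets finally exist; until then every SetUp attempt failed with "secret not found". As an illustrative aside, not part of the log: a minimal client-go sketch of the lookup that kept failing, with the namespace and secret name taken from the entries above and everything else (kubeconfig location, module setup) assumed:]

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default ~/.kube/config path;
	// in-cluster code would use rest.InClusterConfig() instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and secret name are taken from the log entries above.
	_, err = client.CoreV1().Secrets("openstack-operators").Get(
		context.TODO(), "infra-operator-webhook-server-cert", metav1.GetOptions{})
	if err != nil {
		// While this lookup returns NotFound, every MountVolume.SetUp attempt
		// fails the same way and the kubelet keeps extending its retry delay.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("secret present; the cert volume can be mounted")
}
```

[Once the secret is created, by whatever controller owns it in this cluster, the very next retry succeeds, which is exactly the transition visible above.]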
Nov 28 12:51:51 crc kubenswrapper[4779]: E1128 12:51:51.990498 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4"
Nov 28 12:51:51 crc kubenswrapper[4779]: E1128 12:51:51.990685 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8x2fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d77b94747-c6wb2_openstack-operators(f3d69218-2422-473c-ae41-bd2a2b902355): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 12:51:52 crc kubenswrapper[4779]: I1128 12:51:52.017213 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs"
Nov 28 12:51:52 crc kubenswrapper[4779]: I1128 12:51:52.017323 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs"
Nov 28 12:51:52 crc kubenswrapper[4779]: E1128 12:51:52.017364 4779 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 28 12:51:52 crc kubenswrapper[4779]: E1128 12:51:52.017429 4779 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 28 12:51:52 crc kubenswrapper[4779]: E1128 12:51:52.017437 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs podName:31627cc1-b543-4da9-8fe1-ac12e7f09531 nodeName:}" failed. No retries permitted until 2025-11-28 12:52:08.017420096 +0000 UTC m=+988.583095450 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs") pod "openstack-operator-controller-manager-7d967756df-nvprs" (UID: "31627cc1-b543-4da9-8fe1-ac12e7f09531") : secret "webhook-server-cert" not found
Nov 28 12:51:52 crc kubenswrapper[4779]: E1128 12:51:52.017461 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs podName:31627cc1-b543-4da9-8fe1-ac12e7f09531 nodeName:}" failed. No retries permitted until 2025-11-28 12:52:08.017451597 +0000 UTC m=+988.583126951 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs") pod "openstack-operator-controller-manager-7d967756df-nvprs" (UID: "31627cc1-b543-4da9-8fe1-ac12e7f09531") : secret "metrics-server-cert" not found
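[The durationBeforeRetry values for these secret volumes trace the kubelet's exponential backoff for failed mount operations: 4s at 12:51:39, 8s at 12:51:43, and now 16s at 12:51:52. A minimal Go sketch of that doubling pattern, offered as an illustration rather than the kubelet's actual implementation; the 4s starting delay matches the log, but the cap is an assumption this log does not show:]

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// 4s is the first durationBeforeRetry seen in the log above;
	// the 2m cap is an assumed illustration, not taken from this log.
	delay := 4 * time.Second
	const maxDelay = 2 * time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed: no retries permitted for %v\n", attempt, delay)
		if next := delay * 2; next <= maxDelay {
			delay = next // doubles on each failure: 4s, 8s, 16s, 32s, ...
		}
	}
}
```

[The doubling means a missing secret is re-checked quickly at first and then progressively less often, which is why the final successful mount at 12:52:08 lands exactly 16s after the 12:51:52 failure.]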
Nov 28 12:51:52 crc kubenswrapper[4779]: E1128 12:51:52.442042 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423"
Nov 28 12:51:52 crc kubenswrapper[4779]: E1128 12:51:52.442270 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tqnzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57988cc5b5-lnf86_openstack-operators(b1c19869-b98a-40c8-a312-8c49d69bdf0f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 12:51:53 crc kubenswrapper[4779]: I1128 12:51:53.612115 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fr8cd"
Nov 28 12:51:53 crc kubenswrapper[4779]: I1128 12:51:53.742229 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpwn2\" (UniqueName: \"kubernetes.io/projected/ba53250e-91a2-45bf-a609-ebed70fce751-kube-api-access-qpwn2\") pod \"ba53250e-91a2-45bf-a609-ebed70fce751\" (UID: \"ba53250e-91a2-45bf-a609-ebed70fce751\") "
Nov 28 12:51:53 crc kubenswrapper[4779]: I1128 12:51:53.742326 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba53250e-91a2-45bf-a609-ebed70fce751-catalog-content\") pod \"ba53250e-91a2-45bf-a609-ebed70fce751\" (UID: \"ba53250e-91a2-45bf-a609-ebed70fce751\") "
Nov 28 12:51:53 crc kubenswrapper[4779]: I1128 12:51:53.742419 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba53250e-91a2-45bf-a609-ebed70fce751-utilities\") pod \"ba53250e-91a2-45bf-a609-ebed70fce751\" (UID: \"ba53250e-91a2-45bf-a609-ebed70fce751\") "
Nov 28 12:51:53 crc kubenswrapper[4779]: I1128 12:51:53.743648 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba53250e-91a2-45bf-a609-ebed70fce751-utilities" (OuterVolumeSpecName: "utilities") pod "ba53250e-91a2-45bf-a609-ebed70fce751" (UID: "ba53250e-91a2-45bf-a609-ebed70fce751"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:51:53 crc kubenswrapper[4779]: I1128 12:51:53.748404 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba53250e-91a2-45bf-a609-ebed70fce751-kube-api-access-qpwn2" (OuterVolumeSpecName: "kube-api-access-qpwn2") pod "ba53250e-91a2-45bf-a609-ebed70fce751" (UID: "ba53250e-91a2-45bf-a609-ebed70fce751"). InnerVolumeSpecName "kube-api-access-qpwn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:51:53 crc kubenswrapper[4779]: I1128 12:51:53.802221 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba53250e-91a2-45bf-a609-ebed70fce751-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba53250e-91a2-45bf-a609-ebed70fce751" (UID: "ba53250e-91a2-45bf-a609-ebed70fce751"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:51:53 crc kubenswrapper[4779]: I1128 12:51:53.844625 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpwn2\" (UniqueName: \"kubernetes.io/projected/ba53250e-91a2-45bf-a609-ebed70fce751-kube-api-access-qpwn2\") on node \"crc\" DevicePath \"\""
Nov 28 12:51:53 crc kubenswrapper[4779]: I1128 12:51:53.844653 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba53250e-91a2-45bf-a609-ebed70fce751-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 12:51:53 crc kubenswrapper[4779]: I1128 12:51:53.844663 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba53250e-91a2-45bf-a609-ebed70fce751-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 12:51:54 crc kubenswrapper[4779]: E1128 12:51:54.217403 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711"
Nov 28 12:51:54 crc kubenswrapper[4779]: E1128 12:51:54.217598 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-thhxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7b4567c7cf-lfj45_openstack-operators(da8e3e32-3cc1-4b1b-91c5-31ac6e660d65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 12:51:54 crc kubenswrapper[4779]: I1128 12:51:54.557603 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fr8cd" event={"ID":"ba53250e-91a2-45bf-a609-ebed70fce751","Type":"ContainerDied","Data":"aba6b8e0addb44e1569cdb96b948ef940c3021f7ac665877e4d1fc7597c479f3"}
Nov 28 12:51:54 crc kubenswrapper[4779]: I1128 12:51:54.557648 4779 scope.go:117] "RemoveContainer" containerID="9b9d4b328133a718706c072881ed99a196d7fd581aae49c971cb44324dddd166"
Nov 28 12:51:54 crc kubenswrapper[4779]: I1128 12:51:54.557722 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fr8cd"
Nov 28 12:51:54 crc kubenswrapper[4779]: I1128 12:51:54.600665 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fr8cd"]
Nov 28 12:51:54 crc kubenswrapper[4779]: I1128 12:51:54.612173 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fr8cd"]
Nov 28 12:51:55 crc kubenswrapper[4779]: I1128 12:51:55.738463 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba53250e-91a2-45bf-a609-ebed70fce751" path="/var/lib/kubelet/pods/ba53250e-91a2-45bf-a609-ebed70fce751/volumes"
Nov 28 12:51:57 crc kubenswrapper[4779]: I1128 12:51:57.292565 4779 scope.go:117] "RemoveContainer" containerID="dcd0ad50feda751464a9036a836e80643f642e0223875c58529e37d0bb92b977"
Nov 28 12:51:57 crc kubenswrapper[4779]: I1128 12:51:57.541811 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqczf"]
Nov 28 12:51:58 crc kubenswrapper[4779]: W1128 12:51:58.893585 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99aa273c_f970_4e1f_9484_14575338368f.slice/crio-17a641447715ff13fcd040a5524da62d2c4397bfdbb3eba928a31a7bb8bb23d5 WatchSource:0}: Error finding container 17a641447715ff13fcd040a5524da62d2c4397bfdbb3eba928a31a7bb8bb23d5: Status 404 returned error can't find the container with id 17a641447715ff13fcd040a5524da62d2c4397bfdbb3eba928a31a7bb8bb23d5
Nov 28 12:51:59 crc kubenswrapper[4779]: I1128 12:51:59.183935 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r"]
Nov 28 12:51:59 crc kubenswrapper[4779]: I1128 12:51:59.345726 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh"]
Nov 28 12:51:59 crc kubenswrapper[4779]: I1128 12:51:59.511829 4779 scope.go:117] "RemoveContainer" containerID="f57a1a4e37666a44f13873f291dce18a05fd7f92323dd3aba1b8c3b8ba81a77f"
Nov 28 12:51:59 crc kubenswrapper[4779]: W1128 12:51:59.519110 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66bfbaf1_3247_47c1_aa58_19cf5875882e.slice/crio-4f449c535228773d3fbc454a1855727a23262c53fe0b9738a2a99fec3508b6cf WatchSource:0}: Error finding container 4f449c535228773d3fbc454a1855727a23262c53fe0b9738a2a99fec3508b6cf: Status 404 returned error can't find the container with id 4f449c535228773d3fbc454a1855727a23262c53fe0b9738a2a99fec3508b6cf
Nov 28 12:51:59 crc kubenswrapper[4779]: I1128 12:51:59.600808 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" event={"ID":"66bfbaf1-3247-47c1-aa58-19cf5875882e","Type":"ContainerStarted","Data":"4f449c535228773d3fbc454a1855727a23262c53fe0b9738a2a99fec3508b6cf"}
Nov 28 12:51:59 crc kubenswrapper[4779]: I1128 12:51:59.604702 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" event={"ID":"af7046d6-f852-4c62-83e6-ea213812d86c","Type":"ContainerStarted","Data":"37f2d333071c38ade8aef990ee8038859805df5de60967925fef1f5634b8a1b4"}
Nov 28 12:51:59 crc kubenswrapper[4779]: I1128 12:51:59.605824 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqczf" event={"ID":"99aa273c-f970-4e1f-9484-14575338368f","Type":"ContainerStarted","Data":"17a641447715ff13fcd040a5524da62d2c4397bfdbb3eba928a31a7bb8bb23d5"}
Nov 28 12:52:00 crc kubenswrapper[4779]: I1128 12:52:00.617399 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g" event={"ID":"e7e646e3-00c9-4359-b012-aaff60962a76","Type":"ContainerStarted","Data":"07738b4d834ccacd81389c8f56133223e3c24472d80d526250a5dc99c479c9ea"}
Nov 28 12:52:00 crc kubenswrapper[4779]: I1128 12:52:00.619685 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7" event={"ID":"b3e0c6a3-33d8-4c1e-8b44-156de87d5621","Type":"ContainerStarted","Data":"64bb35e56195ea3c6c8426e19247c8773953b0f581754b300bc7e8bf6c9ac8dd"}
Nov 28 12:52:00 crc kubenswrapper[4779]: I1128 12:52:00.621668 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654" event={"ID":"40688ccc-932c-411e-8703-4bf0f11ec3bf","Type":"ContainerStarted","Data":"1008174d70203aa52c1444cc94d0d1247ef5134d8836d604fe1f34dfce7d5313"}
Nov 28 12:52:00 crc kubenswrapper[4779]: I1128 12:52:00.623879 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj" event={"ID":"854f928b-5068-4de9-b865-7fb2a26ca9e4","Type":"ContainerStarted","Data":"137b1452e7c2534dacf4ca081b204e9ad4f378d3210abf65f1039369d8f656de"}
Nov 28 12:52:00 crc kubenswrapper[4779]: I1128 12:52:00.627519 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn" event={"ID":"b96763b6-e6a4-4429-8fe4-6b23620824c1","Type":"ContainerStarted","Data":"622ba1f53e648ef00c7482b51a77fdde5d5d13b51deccb2a2dc7641b1bdf394e"}
Nov 28 12:52:02 crc kubenswrapper[4779]: I1128 12:52:02.641933 4779 generic.go:334] "Generic (PLEG): container finished" podID="99aa273c-f970-4e1f-9484-14575338368f" containerID="6c09f935629be945c52ce2b92631305695bfa5b7a71e84a0a149d4fa76ee7be6" exitCode=0
Nov 28 12:52:02 crc kubenswrapper[4779]: I1128 12:52:02.642287 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqczf" event={"ID":"99aa273c-f970-4e1f-9484-14575338368f","Type":"ContainerDied","Data":"6c09f935629be945c52ce2b92631305695bfa5b7a71e84a0a149d4fa76ee7be6"}
Nov 28 12:52:02 crc kubenswrapper[4779]: I1128 12:52:02.644965 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" event={"ID":"911b9690-ddec-439e-9ef5-a7d80562f51c","Type":"ContainerStarted","Data":"23697b38a9796c97e618c05f3862979fbeaca16bc9191255c551a864294d0305"}
Nov 28 12:52:03 crc kubenswrapper[4779]: I1128 12:52:03.652450 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" event={"ID":"1799095f-becf-4b8e-bb0b-28c04a819e59","Type":"ContainerStarted","Data":"46515a5f7adb4eca7a62368ed6a4bbf92721211164373ee8810c0a8657085fa5"}
Nov 28 12:52:03 crc kubenswrapper[4779]: I1128 12:52:03.654448 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" event={"ID":"bb4ac6b3-6655-4e29-8cf7-bdae98df3386","Type":"ContainerStarted","Data":"7d61d5730e90c2f11862bfb8026ec436212d05eb6300996cf246eca40e8fdec3"}
Nov 28 12:52:03 crc kubenswrapper[4779]: I1128 12:52:03.655952 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt" event={"ID":"1c62c5f4-5757-46d4-92e5-7fdb2b21c88e","Type":"ContainerStarted","Data":"fde580bfd2a3c75f5f6c7beb2173ef40a957b915bdf7af5934bec49248cd0a82"}
Nov 28 12:52:03 crc kubenswrapper[4779]: I1128 12:52:03.658331 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" event={"ID":"623cd065-a088-41d4-9b98-8be8d60c0f20","Type":"ContainerStarted","Data":"890c2f3d855f28db4602d5d9a6d70080e568b09c0bcbe8526c1813292cc47200"}
Nov 28 12:52:03 crc kubenswrapper[4779]: I1128 12:52:03.682850 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-495dt" podStartSLOduration=5.151011154 podStartE2EDuration="27.682822983s" podCreationTimestamp="2025-11-28 12:51:36 +0000 UTC" firstStartedPulling="2025-11-28 12:51:37.122447626 +0000 UTC m=+957.688122990" lastFinishedPulling="2025-11-28 12:51:59.654259445 +0000 UTC m=+980.219934819" observedRunningTime="2025-11-28 12:52:03.671933281 +0000 UTC m=+984.237608635" watchObservedRunningTime="2025-11-28 12:52:03.682822983 +0000 UTC m=+984.248498347"
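[The podStartSLOduration entry above ties its fields together arithmetically: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. A small Go check using the rabbitmq-cluster-operator timestamps; it reproduces the logged values to within about 10ns, the residue presumably coming from the kubelet mixing monotonic and wall-clock readings (an assumption, not something the log states):]

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the podStartSLOduration entry above.
	created := parse("2025-11-28 12:51:36 +0000 UTC")
	firstPull := parse("2025-11-28 12:51:37.122447626 +0000 UTC")
	lastPull := parse("2025-11-28 12:51:59.654259445 +0000 UTC")
	running := parse("2025-11-28 12:52:03.682822983 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)          // 27.682822983s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~5.151011s, vs. the logged 5.151011154
	fmt.Println("E2E:", e2e, "SLO:", slo)
}
```

[In other words, of the ~27.7s this pod took to start end to end, ~22.5s was spent pulling images, leaving ~5.2s of startup work that counts against the SLO.]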
event={"ID":"af7046d6-f852-4c62-83e6-ea213812d86c","Type":"ContainerStarted","Data":"0970243fa6c7069e823c81bed1519b742d0ae7318515a92504ee583a00225cdc"} Nov 28 12:52:04 crc kubenswrapper[4779]: I1128 12:52:04.688864 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqczf" event={"ID":"99aa273c-f970-4e1f-9484-14575338368f","Type":"ContainerStarted","Data":"94226cd7a87a50f25c4b22bd58ce9223db3443b5bf9cca09206d8cb69d638711"} Nov 28 12:52:04 crc kubenswrapper[4779]: E1128 12:52:04.695658 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c" podUID="eaf24224-e1f5-44d8-8151-54be9408b429" Nov 28 12:52:04 crc kubenswrapper[4779]: I1128 12:52:04.702886 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" event={"ID":"f1d9753d-b49d-4e32-b312-137314283984","Type":"ContainerStarted","Data":"61e71169c10835959b1f06d83b45f325b4fe4afc8367b2471cb693d446e91fea"} Nov 28 12:52:05 crc kubenswrapper[4779]: I1128 12:52:05.718725 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c" event={"ID":"eaf24224-e1f5-44d8-8151-54be9408b429","Type":"ContainerStarted","Data":"d815a3cfa687542f6c303885125ab9936f7497f0868b73cfa30fb6bd0f8152e9"} Nov 28 12:52:05 crc kubenswrapper[4779]: I1128 12:52:05.725296 4779 generic.go:334] "Generic (PLEG): container finished" podID="99aa273c-f970-4e1f-9484-14575338368f" containerID="94226cd7a87a50f25c4b22bd58ce9223db3443b5bf9cca09206d8cb69d638711" exitCode=0 Nov 28 12:52:05 crc kubenswrapper[4779]: I1128 12:52:05.742041 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqczf" event={"ID":"99aa273c-f970-4e1f-9484-14575338368f","Type":"ContainerDied","Data":"94226cd7a87a50f25c4b22bd58ce9223db3443b5bf9cca09206d8cb69d638711"} Nov 28 12:52:05 crc kubenswrapper[4779]: I1128 12:52:05.742076 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" Nov 28 12:52:05 crc kubenswrapper[4779]: I1128 12:52:05.742118 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" event={"ID":"3b4accd2-e9c1-4e51-a559-c5cf108f5af1","Type":"ContainerStarted","Data":"57b5ea0627ae15985a50b11f23eddc2c4a762cbcc6afd02d65de3471f20510bc"} Nov 28 12:52:05 crc kubenswrapper[4779]: I1128 12:52:05.770818 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" podStartSLOduration=8.086945715 podStartE2EDuration="30.770800982s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.934927915 +0000 UTC m=+957.500603269" lastFinishedPulling="2025-11-28 12:51:59.618783192 +0000 UTC m=+980.184458536" observedRunningTime="2025-11-28 12:52:05.766276532 +0000 UTC m=+986.331951906" watchObservedRunningTime="2025-11-28 12:52:05.770800982 +0000 UTC m=+986.336476346" Nov 28 12:52:08 crc kubenswrapper[4779]: I1128 12:52:08.059560 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:52:08 crc kubenswrapper[4779]: I1128 12:52:08.061088 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:52:08 crc kubenswrapper[4779]: I1128 12:52:08.068481 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-metrics-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:52:08 crc kubenswrapper[4779]: I1128 12:52:08.070287 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/31627cc1-b543-4da9-8fe1-ac12e7f09531-webhook-certs\") pod \"openstack-operator-controller-manager-7d967756df-nvprs\" (UID: \"31627cc1-b543-4da9-8fe1-ac12e7f09531\") " pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:52:08 crc kubenswrapper[4779]: I1128 12:52:08.323128 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-sbwgq" Nov 28 12:52:08 crc kubenswrapper[4779]: I1128 12:52:08.330243 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:52:13 crc kubenswrapper[4779]: I1128 12:52:13.378963 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs"] Nov 28 12:52:13 crc kubenswrapper[4779]: W1128 12:52:13.386990 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31627cc1_b543_4da9_8fe1_ac12e7f09531.slice/crio-86905c5b27fff7e1a7df9fbbaa13fcee8c5e3c827c10e4268b01385ddc0ef2e4 WatchSource:0}: Error finding container 86905c5b27fff7e1a7df9fbbaa13fcee8c5e3c827c10e4268b01385ddc0ef2e4: Status 404 returned error can't find the container with id 86905c5b27fff7e1a7df9fbbaa13fcee8c5e3c827c10e4268b01385ddc0ef2e4 Nov 28 12:52:13 crc kubenswrapper[4779]: I1128 12:52:13.824298 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" event={"ID":"31627cc1-b543-4da9-8fe1-ac12e7f09531","Type":"ContainerStarted","Data":"86905c5b27fff7e1a7df9fbbaa13fcee8c5e3c827c10e4268b01385ddc0ef2e4"} Nov 28 12:52:14 crc kubenswrapper[4779]: E1128 12:52:14.700803 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz" podUID="39fdca45-fa34-4d90-93a9-1123dff79930" Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.833208 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" event={"ID":"623cd065-a088-41d4-9b98-8be8d60c0f20","Type":"ContainerStarted","Data":"c53530afc4987fef69cc30f4df023a3f051e0df87eb277bcb910a132290eeb6c"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.835293 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7" event={"ID":"b3e0c6a3-33d8-4c1e-8b44-156de87d5621","Type":"ContainerStarted","Data":"7d8038f1e9ae5d2e9f3eb0d157e01bba196628aa5e2e5fb5b0fee3596d5740fb"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.836657 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-rh5q9" event={"ID":"8d20efbb-527c-4085-a974-d49ee454b545","Type":"ContainerStarted","Data":"587dad8cf5ce3dd090de15c44459b39672584db3ba72960f839e1f569c033646"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.837726 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2" event={"ID":"f3d69218-2422-473c-ae41-bd2a2b902355","Type":"ContainerStarted","Data":"7ec300faaadb333281e9ffa7587ec653fad8e3c5d8c115b2abf4200fdbd6faf3"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.838905 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" event={"ID":"1799095f-becf-4b8e-bb0b-28c04a819e59","Type":"ContainerStarted","Data":"b1684fff12791b9baecfd86821e2833c47ef66e2f19cb2918c9322ee9d7849c3"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.839874 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" 
event={"ID":"31627cc1-b543-4da9-8fe1-ac12e7f09531","Type":"ContainerStarted","Data":"6a264265e80f3a430e21c0ca1ce13161be3fa7e521c06325bb5d4fb1ca620be8"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.841843 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" event={"ID":"911b9690-ddec-439e-9ef5-a7d80562f51c","Type":"ContainerStarted","Data":"39a7f11acfa206f1b096d890365edb6a16ded4c9aebbcca1bd25528d84a9c9ce"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.843453 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654" event={"ID":"40688ccc-932c-411e-8703-4bf0f11ec3bf","Type":"ContainerStarted","Data":"82c7b2a12c4c96e8c2551e90a753f8213d61183876ff57982160baed1f3da7cd"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.844862 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj" event={"ID":"854f928b-5068-4de9-b865-7fb2a26ca9e4","Type":"ContainerStarted","Data":"e971f95feb93beb97d20f0056be7629e7baed82987dd06bf30c3ffc3f03a02f4"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.846295 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" event={"ID":"f1d9753d-b49d-4e32-b312-137314283984","Type":"ContainerStarted","Data":"f27ba16ee68f5d5b2bad5545d85f3b29bb374f6db1b843ffbf0c10f8cdeed556"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.847487 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc" event={"ID":"75996749-aa6c-4a8e-ba7f-412209db3939","Type":"ContainerStarted","Data":"e4257f9410ecac4a09ab56d8b400708763709a8ef7d2c4842ec3ba3efa9c9467"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.848892 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45" event={"ID":"da8e3e32-3cc1-4b1b-91c5-31ac6e660d65","Type":"ContainerStarted","Data":"4d1fb1433c8356fa0109a5af8eb6a08f7ba8462e926a06770e3e5570bcdc2129"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.850389 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g" event={"ID":"e7e646e3-00c9-4359-b012-aaff60962a76","Type":"ContainerStarted","Data":"f6c8a581c93703db408459a572bdc4a4d2f4e011cb3459538481564d7b71368d"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.851634 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" event={"ID":"bb4ac6b3-6655-4e29-8cf7-bdae98df3386","Type":"ContainerStarted","Data":"e75ba42837f0213873a274cd08b6ab865bc48acc7544e6893affad6a9b604a35"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.852538 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz" event={"ID":"39fdca45-fa34-4d90-93a9-1123dff79930","Type":"ContainerStarted","Data":"1ff798b1740ea892645a24b89aaa076005367d75367895c3aa89c0a052d55164"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.856874 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" 
event={"ID":"af7046d6-f852-4c62-83e6-ea213812d86c","Type":"ContainerStarted","Data":"a126909abc6073dfe03d0f1dffa1af65318215209587b66b20277c7b9ae1a8c0"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.857931 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x" event={"ID":"493d54b8-1e0a-4270-8180-ba1bc746c783","Type":"ContainerStarted","Data":"4560a990d8bf1c1eb36d167314c14399cbf902b8312e697bb789c86ffa72c18d"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:14.858996 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn" event={"ID":"b96763b6-e6a4-4429-8fe4-6b23620824c1","Type":"ContainerStarted","Data":"a876dd3463c2992654a935c6c7a3bdd8443a000121437627988173066cfa9d72"} Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:16.284395 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:16.284475 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:16.337657 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:16.338041 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zzflc" Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:16.339315 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ec718b6174bac7e525f03254a91895c0abe5e9151b5cc34c2ee2019a1b96a1c"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:52:16 crc kubenswrapper[4779]: I1128 12:52:16.339393 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://2ec718b6174bac7e525f03254a91895c0abe5e9151b5cc34c2ee2019a1b96a1c" gracePeriod=600 Nov 28 12:52:16 crc kubenswrapper[4779]: E1128 12:52:16.986981 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:51a478c52d9012c08743f63b44a3721c7ff7a0599ba9c2cf89ad54ea41b19e41" Nov 28 12:52:16 crc kubenswrapper[4779]: E1128 12:52:16.987470 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:51a478c52d9012c08743f63b44a3721c7ff7a0599ba9c2cf89ad54ea41b19e41,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_I
MAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELAT
ED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-an
telope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMA
GE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f722q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh_openstack-operators(66bfbaf1-3247-47c1-aa58-19cf5875882e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.348302 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="2ec718b6174bac7e525f03254a91895c0abe5e9151b5cc34c2ee2019a1b96a1c" exitCode=0 Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.348387 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"2ec718b6174bac7e525f03254a91895c0abe5e9151b5cc34c2ee2019a1b96a1c"} Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.348728 4779 scope.go:117] "RemoveContainer" containerID="f5e93de974be41a0eb6481eaf0510c9e7e4484d2b3ab950a8d456a68806d2e6f" Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.349219 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.349376 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.349422 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn" Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.354369 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.354716 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn" Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.354807 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.370914 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-hjhz4" podStartSLOduration=15.226581986 podStartE2EDuration="42.370891124s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:37.124149862 +0000 UTC m=+957.689825226" lastFinishedPulling="2025-11-28 12:52:04.26845902 +0000 UTC m=+984.834134364" observedRunningTime="2025-11-28 12:52:17.369541261 +0000 UTC m=+997.935216625" watchObservedRunningTime="2025-11-28 12:52:17.370891124 +0000 UTC m=+997.936566508" Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.402446 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" podStartSLOduration=17.737219075 podStartE2EDuration="42.402421708s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:37.122698693 +0000 UTC m=+957.688374067" lastFinishedPulling="2025-11-28 12:52:01.787901316 +0000 UTC m=+982.353576700" observedRunningTime="2025-11-28 12:52:17.395475886 +0000 UTC m=+997.961151260" watchObservedRunningTime="2025-11-28 12:52:17.402421708 +0000 UTC m=+997.968097102" Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.426580 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-v49kv" podStartSLOduration=15.007164873 podStartE2EDuration="42.426556022s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.932214762 +0000 UTC m=+957.497890116" lastFinishedPulling="2025-11-28 12:52:04.351605911 +0000 UTC m=+984.917281265" observedRunningTime="2025-11-28 12:52:17.420477256 +0000 UTC m=+997.986152650" watchObservedRunningTime="2025-11-28 12:52:17.426556022 +0000 UTC m=+997.992231416" Nov 28 12:52:17 crc kubenswrapper[4779]: I1128 12:52:17.452034 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-xqxsn" podStartSLOduration=15.089044089 podStartE2EDuration="42.452005569s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.803506569 +0000 UTC m=+957.369181923" lastFinishedPulling="2025-11-28 12:52:04.166468049 +0000 UTC m=+984.732143403" observedRunningTime="2025-11-28 12:52:17.443377228 +0000 UTC m=+998.009052612" watchObservedRunningTime="2025-11-28 12:52:17.452005569 +0000 UTC m=+998.017680973" Nov 28 12:52:17 crc kubenswrapper[4779]: E1128 12:52:17.482127 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x" podUID="493d54b8-1e0a-4270-8180-ba1bc746c783" Nov 28 12:52:17 crc kubenswrapper[4779]: E1128 12:52:17.482513 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-955677c94-rh5q9" podUID="8d20efbb-527c-4085-a974-d49ee454b545" Nov 28 12:52:17 crc kubenswrapper[4779]: E1128 12:52:17.482786 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2" podUID="f3d69218-2422-473c-ae41-bd2a2b902355" Nov 28 12:52:17 crc kubenswrapper[4779]: E1128 12:52:17.637493 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45" podUID="da8e3e32-3cc1-4b1b-91c5-31ac6e660d65" Nov 28 12:52:17 crc kubenswrapper[4779]: E1128 12:52:17.637796 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc" podUID="75996749-aa6c-4a8e-ba7f-412209db3939" Nov 28 12:52:18 crc kubenswrapper[4779]: I1128 12:52:18.360137 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" event={"ID":"66bfbaf1-3247-47c1-aa58-19cf5875882e","Type":"ContainerStarted","Data":"ba954990fb177f43cda2c130a8775f5dd1ea74328e53d85316ab4515cf5324b6"} Nov 28 12:52:18 crc kubenswrapper[4779]: I1128 12:52:18.366582 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:52:18 crc kubenswrapper[4779]: I1128 12:52:18.422334 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" podStartSLOduration=43.422308335 podStartE2EDuration="43.422308335s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:52:18.418409726 +0000 UTC m=+998.984085120" watchObservedRunningTime="2025-11-28 12:52:18.422308335 +0000 UTC m=+998.987983719" Nov 28 12:52:18 crc kubenswrapper[4779]: I1128 12:52:18.495689 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g" podStartSLOduration=15.724741469 podStartE2EDuration="43.495655143s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.390443435 +0000 UTC m=+956.956118789" lastFinishedPulling="2025-11-28 12:52:04.161357079 +0000 UTC m=+984.727032463" observedRunningTime="2025-11-28 12:52:18.489948983 +0000 UTC m=+999.055624377" watchObservedRunningTime="2025-11-28 12:52:18.495655143 +0000 UTC 
m=+999.061330537" Nov 28 12:52:18 crc kubenswrapper[4779]: I1128 12:52:18.545996 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj" podStartSLOduration=15.704423283 podStartE2EDuration="43.545978977s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.545912807 +0000 UTC m=+957.111588161" lastFinishedPulling="2025-11-28 12:52:04.387468501 +0000 UTC m=+984.953143855" observedRunningTime="2025-11-28 12:52:18.520167634 +0000 UTC m=+999.085843018" watchObservedRunningTime="2025-11-28 12:52:18.545978977 +0000 UTC m=+999.111654331" Nov 28 12:52:18 crc kubenswrapper[4779]: I1128 12:52:18.576767 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" podStartSLOduration=38.893908346 podStartE2EDuration="43.576739557s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:59.391022351 +0000 UTC m=+979.956697705" lastFinishedPulling="2025-11-28 12:52:04.073853552 +0000 UTC m=+984.639528916" observedRunningTime="2025-11-28 12:52:18.56381724 +0000 UTC m=+999.129492634" watchObservedRunningTime="2025-11-28 12:52:18.576739557 +0000 UTC m=+999.142414951" Nov 28 12:52:18 crc kubenswrapper[4779]: I1128 12:52:18.587148 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" podStartSLOduration=16.057881327 podStartE2EDuration="43.58712769s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.941490481 +0000 UTC m=+957.507165835" lastFinishedPulling="2025-11-28 12:52:04.470736844 +0000 UTC m=+985.036412198" observedRunningTime="2025-11-28 12:52:18.586151353 +0000 UTC m=+999.151826717" watchObservedRunningTime="2025-11-28 12:52:18.58712769 +0000 UTC m=+999.152803054" Nov 28 12:52:18 crc kubenswrapper[4779]: I1128 12:52:18.608484 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654" podStartSLOduration=16.287588211 podStartE2EDuration="43.608472635s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.924083784 +0000 UTC m=+957.489759138" lastFinishedPulling="2025-11-28 12:52:04.244968208 +0000 UTC m=+984.810643562" observedRunningTime="2025-11-28 12:52:18.607161052 +0000 UTC m=+999.172836426" watchObservedRunningTime="2025-11-28 12:52:18.608472635 +0000 UTC m=+999.174147999" Nov 28 12:52:18 crc kubenswrapper[4779]: I1128 12:52:18.625291 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7" podStartSLOduration=16.072707785 podStartE2EDuration="43.62527402s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.608752903 +0000 UTC m=+957.174428257" lastFinishedPulling="2025-11-28 12:52:04.161319138 +0000 UTC m=+984.726994492" observedRunningTime="2025-11-28 12:52:18.621190498 +0000 UTC m=+999.186865852" watchObservedRunningTime="2025-11-28 12:52:18.62527402 +0000 UTC m=+999.190949374" Nov 28 12:52:18 crc kubenswrapper[4779]: I1128 12:52:18.681509 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" podStartSLOduration=16.058410263 
podStartE2EDuration="43.681486768s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.936748964 +0000 UTC m=+957.502424328" lastFinishedPulling="2025-11-28 12:52:04.559825479 +0000 UTC m=+985.125500833" observedRunningTime="2025-11-28 12:52:18.674523005 +0000 UTC m=+999.240198359" watchObservedRunningTime="2025-11-28 12:52:18.681486768 +0000 UTC m=+999.247162132" Nov 28 12:52:18 crc kubenswrapper[4779]: E1128 12:52:18.785417 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" podUID="66bfbaf1-3247-47c1-aa58-19cf5875882e" Nov 28 12:52:19 crc kubenswrapper[4779]: I1128 12:52:19.379177 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7d967756df-nvprs" Nov 28 12:52:19 crc kubenswrapper[4779]: E1128 12:52:19.626195 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:51a478c52d9012c08743f63b44a3721c7ff7a0599ba9c2cf89ad54ea41b19e41\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" podUID="66bfbaf1-3247-47c1-aa58-19cf5875882e" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.374241 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.379719 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-7pv5r" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.386364 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45" event={"ID":"da8e3e32-3cc1-4b1b-91c5-31ac6e660d65","Type":"ContainerStarted","Data":"37220415a0bf4fe7dad6ecbab4066c9dd7989019e8ef5ddae5f2bdffaca35606"} Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.386554 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.387833 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86" event={"ID":"b1c19869-b98a-40c8-a312-8c49d69bdf0f","Type":"ContainerStarted","Data":"29c6df5f443cdc0346c00dad9606c1c266a6d84f11e0a5b47ec0a28cea1cec20"} Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.387956 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.389926 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-rh5q9" event={"ID":"8d20efbb-527c-4085-a974-d49ee454b545","Type":"ContainerStarted","Data":"cd271a397e22540eb157881c9733469c0d899f7ccc42e5315e852586c57b89fb"} Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.390101 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/designate-operator-controller-manager-955677c94-rh5q9" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.391247 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2" event={"ID":"f3d69218-2422-473c-ae41-bd2a2b902355","Type":"ContainerStarted","Data":"35a9c31036fa9c95bf2e230f50e6c28d7d53c78b1073e694760e20588b94b19b"} Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.391407 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.392651 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc" event={"ID":"75996749-aa6c-4a8e-ba7f-412209db3939","Type":"ContainerStarted","Data":"daf2c5b3b305ea9f84da189a4c7081a34e415c3f2a9f047e32dc4305e9ec5ae6"} Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.392787 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.394665 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqczf" event={"ID":"99aa273c-f970-4e1f-9484-14575338368f","Type":"ContainerStarted","Data":"ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a"} Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.396021 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c" event={"ID":"eaf24224-e1f5-44d8-8151-54be9408b429","Type":"ContainerStarted","Data":"bc7513e9ad72f490608ee09deb1feaa1b5ec6bf23b19ce653d7312ffdce1203f"} Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.396144 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.397890 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"19d1e85c2d2159fafc03753bd25b2d9cba3a3d26bcb40723109739bd64095a04"} Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.399226 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x" event={"ID":"493d54b8-1e0a-4270-8180-ba1bc746c783","Type":"ContainerStarted","Data":"750b99da47d4a37144a04413171c39183dff35a303be1c90c16478eda76105c0"} Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.399389 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.400837 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz" event={"ID":"39fdca45-fa34-4d90-93a9-1123dff79930","Type":"ContainerStarted","Data":"560e88318d213dfc3aecbe308030bd8ca6ef9ca9095ce10bf2d7923682d3716f"} Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.400947 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz" Nov 28 12:52:21 crc 
kubenswrapper[4779]: I1128 12:52:21.601873 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2" podStartSLOduration=2.8549085290000003 podStartE2EDuration="46.601855518s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.929991362 +0000 UTC m=+957.495666726" lastFinishedPulling="2025-11-28 12:52:20.676938351 +0000 UTC m=+1001.242613715" observedRunningTime="2025-11-28 12:52:21.60026949 +0000 UTC m=+1002.165944844" watchObservedRunningTime="2025-11-28 12:52:21.601855518 +0000 UTC m=+1002.167530872" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.651216 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x" podStartSLOduration=2.494857446 podStartE2EDuration="46.651201335s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.639929439 +0000 UTC m=+957.205604793" lastFinishedPulling="2025-11-28 12:52:20.796273328 +0000 UTC m=+1001.361948682" observedRunningTime="2025-11-28 12:52:21.647226335 +0000 UTC m=+1002.212901689" watchObservedRunningTime="2025-11-28 12:52:21.651201335 +0000 UTC m=+1002.216876689" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.678651 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86" podStartSLOduration=2.973837357 podStartE2EDuration="46.678638317s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.926633922 +0000 UTC m=+957.492309276" lastFinishedPulling="2025-11-28 12:52:20.631434862 +0000 UTC m=+1001.197110236" observedRunningTime="2025-11-28 12:52:21.676280616 +0000 UTC m=+1002.241955970" watchObservedRunningTime="2025-11-28 12:52:21.678638317 +0000 UTC m=+1002.244313671" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.717750 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45" podStartSLOduration=2.84465731 podStartE2EDuration="46.717734774s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.803821137 +0000 UTC m=+957.369496491" lastFinishedPulling="2025-11-28 12:52:20.676898581 +0000 UTC m=+1001.242573955" observedRunningTime="2025-11-28 12:52:21.715602816 +0000 UTC m=+1002.281278170" watchObservedRunningTime="2025-11-28 12:52:21.717734774 +0000 UTC m=+1002.283410128" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.741859 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lqczf" podStartSLOduration=19.158054631 podStartE2EDuration="36.741839077s" podCreationTimestamp="2025-11-28 12:51:45 +0000 UTC" firstStartedPulling="2025-11-28 12:52:02.722557045 +0000 UTC m=+983.288232409" lastFinishedPulling="2025-11-28 12:52:20.306341461 +0000 UTC m=+1000.872016855" observedRunningTime="2025-11-28 12:52:21.739509456 +0000 UTC m=+1002.305184810" watchObservedRunningTime="2025-11-28 12:52:21.741839077 +0000 UTC m=+1002.307514431" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.777795 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc" podStartSLOduration=2.91087118 podStartE2EDuration="46.777781269s" 
podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.809990002 +0000 UTC m=+957.375665356" lastFinishedPulling="2025-11-28 12:52:20.676900071 +0000 UTC m=+1001.242575445" observedRunningTime="2025-11-28 12:52:21.773541344 +0000 UTC m=+1002.339216698" watchObservedRunningTime="2025-11-28 12:52:21.777781269 +0000 UTC m=+1002.343456613" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.790866 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz" podStartSLOduration=3.148278615 podStartE2EDuration="46.790850508s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:37.084607701 +0000 UTC m=+957.650283095" lastFinishedPulling="2025-11-28 12:52:20.727179624 +0000 UTC m=+1001.292854988" observedRunningTime="2025-11-28 12:52:21.788056279 +0000 UTC m=+1002.353731633" watchObservedRunningTime="2025-11-28 12:52:21.790850508 +0000 UTC m=+1002.356525862" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.836420 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-955677c94-rh5q9" podStartSLOduration=2.779885601 podStartE2EDuration="46.836405648s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.584633186 +0000 UTC m=+957.150308540" lastFinishedPulling="2025-11-28 12:52:20.641153193 +0000 UTC m=+1001.206828587" observedRunningTime="2025-11-28 12:52:21.815349928 +0000 UTC m=+1002.381025282" watchObservedRunningTime="2025-11-28 12:52:21.836405648 +0000 UTC m=+1002.402081002" Nov 28 12:52:21 crc kubenswrapper[4779]: I1128 12:52:21.837184 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c" podStartSLOduration=2.8482399000000003 podStartE2EDuration="46.837180542s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:36.645868379 +0000 UTC m=+957.211543733" lastFinishedPulling="2025-11-28 12:52:20.634809011 +0000 UTC m=+1001.200484375" observedRunningTime="2025-11-28 12:52:21.834522035 +0000 UTC m=+1002.400197389" watchObservedRunningTime="2025-11-28 12:52:21.837180542 +0000 UTC m=+1002.402855896" Nov 28 12:52:25 crc kubenswrapper[4779]: I1128 12:52:25.565848 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lqczf" Nov 28 12:52:25 crc kubenswrapper[4779]: I1128 12:52:25.566813 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lqczf" Nov 28 12:52:25 crc kubenswrapper[4779]: I1128 12:52:25.596322 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g" Nov 28 12:52:25 crc kubenswrapper[4779]: I1128 12:52:25.601088 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-hhr2g" Nov 28 12:52:25 crc kubenswrapper[4779]: I1128 12:52:25.634862 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj" Nov 28 12:52:25 crc kubenswrapper[4779]: I1128 12:52:25.639399 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-l52fj" Nov 28 12:52:25 crc kubenswrapper[4779]: I1128 12:52:25.653685 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lqczf" Nov 28 12:52:25 crc kubenswrapper[4779]: I1128 12:52:25.703143 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7" Nov 28 12:52:25 crc kubenswrapper[4779]: I1128 12:52:25.714474 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-wptr7" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.005921 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.009268 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-cnfmd" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.047274 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.047360 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.050729 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-vd654" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.051304 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-kvnt5" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.124240 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-lnf86" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.232888 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d77b94747-c6wb2" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.297692 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.299675 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.345068 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-h4czz" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.505314 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lqczf" Nov 28 12:52:26 crc kubenswrapper[4779]: I1128 12:52:26.552312 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqczf"] Nov 28 12:52:28 crc kubenswrapper[4779]: I1128 12:52:28.470560 4779 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-lqczf" podUID="99aa273c-f970-4e1f-9484-14575338368f" containerName="registry-server" containerID="cri-o://ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a" gracePeriod=2 Nov 28 12:52:28 crc kubenswrapper[4779]: I1128 12:52:28.953923 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqczf" Nov 28 12:52:28 crc kubenswrapper[4779]: I1128 12:52:28.983338 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfvs4\" (UniqueName: \"kubernetes.io/projected/99aa273c-f970-4e1f-9484-14575338368f-kube-api-access-tfvs4\") pod \"99aa273c-f970-4e1f-9484-14575338368f\" (UID: \"99aa273c-f970-4e1f-9484-14575338368f\") " Nov 28 12:52:28 crc kubenswrapper[4779]: I1128 12:52:28.989081 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99aa273c-f970-4e1f-9484-14575338368f-kube-api-access-tfvs4" (OuterVolumeSpecName: "kube-api-access-tfvs4") pod "99aa273c-f970-4e1f-9484-14575338368f" (UID: "99aa273c-f970-4e1f-9484-14575338368f"). InnerVolumeSpecName "kube-api-access-tfvs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.084443 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aa273c-f970-4e1f-9484-14575338368f-catalog-content\") pod \"99aa273c-f970-4e1f-9484-14575338368f\" (UID: \"99aa273c-f970-4e1f-9484-14575338368f\") " Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.084643 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aa273c-f970-4e1f-9484-14575338368f-utilities\") pod \"99aa273c-f970-4e1f-9484-14575338368f\" (UID: \"99aa273c-f970-4e1f-9484-14575338368f\") " Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.085019 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfvs4\" (UniqueName: \"kubernetes.io/projected/99aa273c-f970-4e1f-9484-14575338368f-kube-api-access-tfvs4\") on node \"crc\" DevicePath \"\"" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.086338 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99aa273c-f970-4e1f-9484-14575338368f-utilities" (OuterVolumeSpecName: "utilities") pod "99aa273c-f970-4e1f-9484-14575338368f" (UID: "99aa273c-f970-4e1f-9484-14575338368f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.105470 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99aa273c-f970-4e1f-9484-14575338368f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99aa273c-f970-4e1f-9484-14575338368f" (UID: "99aa273c-f970-4e1f-9484-14575338368f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.186998 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99aa273c-f970-4e1f-9484-14575338368f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.187028 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99aa273c-f970-4e1f-9484-14575338368f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.481780 4779 generic.go:334] "Generic (PLEG): container finished" podID="99aa273c-f970-4e1f-9484-14575338368f" containerID="ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a" exitCode=0 Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.481846 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqczf" event={"ID":"99aa273c-f970-4e1f-9484-14575338368f","Type":"ContainerDied","Data":"ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a"} Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.481926 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqczf" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.481946 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqczf" event={"ID":"99aa273c-f970-4e1f-9484-14575338368f","Type":"ContainerDied","Data":"17a641447715ff13fcd040a5524da62d2c4397bfdbb3eba928a31a7bb8bb23d5"} Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.482025 4779 scope.go:117] "RemoveContainer" containerID="ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.512509 4779 scope.go:117] "RemoveContainer" containerID="94226cd7a87a50f25c4b22bd58ce9223db3443b5bf9cca09206d8cb69d638711" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.534546 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqczf"] Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.544036 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqczf"] Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.552370 4779 scope.go:117] "RemoveContainer" containerID="6c09f935629be945c52ce2b92631305695bfa5b7a71e84a0a149d4fa76ee7be6" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.569454 4779 scope.go:117] "RemoveContainer" containerID="ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a" Nov 28 12:52:29 crc kubenswrapper[4779]: E1128 12:52:29.569901 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a\": container with ID starting with ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a not found: ID does not exist" containerID="ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.569943 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a"} err="failed to get container status \"ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a\": rpc error: 
code = NotFound desc = could not find container \"ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a\": container with ID starting with ecf90947b92c463515637c86756e566aef79207b6196d3aff2480e497803310a not found: ID does not exist" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.569973 4779 scope.go:117] "RemoveContainer" containerID="94226cd7a87a50f25c4b22bd58ce9223db3443b5bf9cca09206d8cb69d638711" Nov 28 12:52:29 crc kubenswrapper[4779]: E1128 12:52:29.570386 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94226cd7a87a50f25c4b22bd58ce9223db3443b5bf9cca09206d8cb69d638711\": container with ID starting with 94226cd7a87a50f25c4b22bd58ce9223db3443b5bf9cca09206d8cb69d638711 not found: ID does not exist" containerID="94226cd7a87a50f25c4b22bd58ce9223db3443b5bf9cca09206d8cb69d638711" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.570433 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94226cd7a87a50f25c4b22bd58ce9223db3443b5bf9cca09206d8cb69d638711"} err="failed to get container status \"94226cd7a87a50f25c4b22bd58ce9223db3443b5bf9cca09206d8cb69d638711\": rpc error: code = NotFound desc = could not find container \"94226cd7a87a50f25c4b22bd58ce9223db3443b5bf9cca09206d8cb69d638711\": container with ID starting with 94226cd7a87a50f25c4b22bd58ce9223db3443b5bf9cca09206d8cb69d638711 not found: ID does not exist" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.570461 4779 scope.go:117] "RemoveContainer" containerID="6c09f935629be945c52ce2b92631305695bfa5b7a71e84a0a149d4fa76ee7be6" Nov 28 12:52:29 crc kubenswrapper[4779]: E1128 12:52:29.570680 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c09f935629be945c52ce2b92631305695bfa5b7a71e84a0a149d4fa76ee7be6\": container with ID starting with 6c09f935629be945c52ce2b92631305695bfa5b7a71e84a0a149d4fa76ee7be6 not found: ID does not exist" containerID="6c09f935629be945c52ce2b92631305695bfa5b7a71e84a0a149d4fa76ee7be6" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.570704 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c09f935629be945c52ce2b92631305695bfa5b7a71e84a0a149d4fa76ee7be6"} err="failed to get container status \"6c09f935629be945c52ce2b92631305695bfa5b7a71e84a0a149d4fa76ee7be6\": rpc error: code = NotFound desc = could not find container \"6c09f935629be945c52ce2b92631305695bfa5b7a71e84a0a149d4fa76ee7be6\": container with ID starting with 6c09f935629be945c52ce2b92631305695bfa5b7a71e84a0a149d4fa76ee7be6 not found: ID does not exist" Nov 28 12:52:29 crc kubenswrapper[4779]: I1128 12:52:29.735457 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99aa273c-f970-4e1f-9484-14575338368f" path="/var/lib/kubelet/pods/99aa273c-f970-4e1f-9484-14575338368f/volumes" Nov 28 12:52:35 crc kubenswrapper[4779]: I1128 12:52:35.568452 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" event={"ID":"66bfbaf1-3247-47c1-aa58-19cf5875882e","Type":"ContainerStarted","Data":"d34f585fe67432d7d3f1aec5507c211932de16f5ab1eee32c32aba7ac18d11da"} Nov 28 12:52:35 crc kubenswrapper[4779]: I1128 12:52:35.569371 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" Nov 
28 12:52:35 crc kubenswrapper[4779]: I1128 12:52:35.614705 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" podStartSLOduration=24.930263957 podStartE2EDuration="1m0.614673081s" podCreationTimestamp="2025-11-28 12:51:35 +0000 UTC" firstStartedPulling="2025-11-28 12:51:59.544482146 +0000 UTC m=+980.110157500" lastFinishedPulling="2025-11-28 12:52:35.22889123 +0000 UTC m=+1015.794566624" observedRunningTime="2025-11-28 12:52:35.609995605 +0000 UTC m=+1016.175670959" watchObservedRunningTime="2025-11-28 12:52:35.614673081 +0000 UTC m=+1016.180348475" Nov 28 12:52:35 crc kubenswrapper[4779]: I1128 12:52:35.646619 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-955677c94-rh5q9" Nov 28 12:52:35 crc kubenswrapper[4779]: I1128 12:52:35.711425 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-ns58c" Nov 28 12:52:35 crc kubenswrapper[4779]: I1128 12:52:35.781869 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-n952x" Nov 28 12:52:35 crc kubenswrapper[4779]: I1128 12:52:35.796004 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-lfj45" Nov 28 12:52:35 crc kubenswrapper[4779]: I1128 12:52:35.817755 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-9xxwc" Nov 28 12:52:41 crc kubenswrapper[4779]: I1128 12:52:41.749361 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.389941 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7vgkf"] Nov 28 12:52:56 crc kubenswrapper[4779]: E1128 12:52:56.390490 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99aa273c-f970-4e1f-9484-14575338368f" containerName="registry-server" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.390517 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="99aa273c-f970-4e1f-9484-14575338368f" containerName="registry-server" Nov 28 12:52:56 crc kubenswrapper[4779]: E1128 12:52:56.390547 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99aa273c-f970-4e1f-9484-14575338368f" containerName="extract-content" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.390553 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="99aa273c-f970-4e1f-9484-14575338368f" containerName="extract-content" Nov 28 12:52:56 crc kubenswrapper[4779]: E1128 12:52:56.390566 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba53250e-91a2-45bf-a609-ebed70fce751" containerName="registry-server" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.390572 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba53250e-91a2-45bf-a609-ebed70fce751" containerName="registry-server" Nov 28 12:52:56 crc kubenswrapper[4779]: E1128 12:52:56.390586 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba53250e-91a2-45bf-a609-ebed70fce751" containerName="extract-content" Nov 28 12:52:56 crc 
kubenswrapper[4779]: I1128 12:52:56.390592 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba53250e-91a2-45bf-a609-ebed70fce751" containerName="extract-content" Nov 28 12:52:56 crc kubenswrapper[4779]: E1128 12:52:56.390601 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99aa273c-f970-4e1f-9484-14575338368f" containerName="extract-utilities" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.390608 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="99aa273c-f970-4e1f-9484-14575338368f" containerName="extract-utilities" Nov 28 12:52:56 crc kubenswrapper[4779]: E1128 12:52:56.390622 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba53250e-91a2-45bf-a609-ebed70fce751" containerName="extract-utilities" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.390628 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba53250e-91a2-45bf-a609-ebed70fce751" containerName="extract-utilities" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.390754 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="99aa273c-f970-4e1f-9484-14575338368f" containerName="registry-server" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.390767 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba53250e-91a2-45bf-a609-ebed70fce751" containerName="registry-server" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.393954 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.397179 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.397434 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.397541 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.397642 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-r6qb5" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.403277 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7vgkf"] Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.467710 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sr5r4"] Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.469180 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.470942 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.483803 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sr5r4"] Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.516656 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92krt\" (UniqueName: \"kubernetes.io/projected/2c4a21b9-6b54-42cd-9dea-630957b1ba47-kube-api-access-92krt\") pod \"dnsmasq-dns-675f4bcbfc-7vgkf\" (UID: \"2c4a21b9-6b54-42cd-9dea-630957b1ba47\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.516742 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g55g\" (UniqueName: \"kubernetes.io/projected/6e93f61b-ad6b-4f49-8916-1b371c57865e-kube-api-access-8g55g\") pod \"dnsmasq-dns-78dd6ddcc-sr5r4\" (UID: \"6e93f61b-ad6b-4f49-8916-1b371c57865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.516773 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4a21b9-6b54-42cd-9dea-630957b1ba47-config\") pod \"dnsmasq-dns-675f4bcbfc-7vgkf\" (UID: \"2c4a21b9-6b54-42cd-9dea-630957b1ba47\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.516809 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e93f61b-ad6b-4f49-8916-1b371c57865e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sr5r4\" (UID: \"6e93f61b-ad6b-4f49-8916-1b371c57865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.516826 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e93f61b-ad6b-4f49-8916-1b371c57865e-config\") pod \"dnsmasq-dns-78dd6ddcc-sr5r4\" (UID: \"6e93f61b-ad6b-4f49-8916-1b371c57865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.617985 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92krt\" (UniqueName: \"kubernetes.io/projected/2c4a21b9-6b54-42cd-9dea-630957b1ba47-kube-api-access-92krt\") pod \"dnsmasq-dns-675f4bcbfc-7vgkf\" (UID: \"2c4a21b9-6b54-42cd-9dea-630957b1ba47\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.618058 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g55g\" (UniqueName: \"kubernetes.io/projected/6e93f61b-ad6b-4f49-8916-1b371c57865e-kube-api-access-8g55g\") pod \"dnsmasq-dns-78dd6ddcc-sr5r4\" (UID: \"6e93f61b-ad6b-4f49-8916-1b371c57865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.618117 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4a21b9-6b54-42cd-9dea-630957b1ba47-config\") pod \"dnsmasq-dns-675f4bcbfc-7vgkf\" (UID: \"2c4a21b9-6b54-42cd-9dea-630957b1ba47\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.618169 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e93f61b-ad6b-4f49-8916-1b371c57865e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sr5r4\" (UID: \"6e93f61b-ad6b-4f49-8916-1b371c57865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.618192 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e93f61b-ad6b-4f49-8916-1b371c57865e-config\") pod \"dnsmasq-dns-78dd6ddcc-sr5r4\" (UID: \"6e93f61b-ad6b-4f49-8916-1b371c57865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.619293 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e93f61b-ad6b-4f49-8916-1b371c57865e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sr5r4\" (UID: \"6e93f61b-ad6b-4f49-8916-1b371c57865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.619330 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e93f61b-ad6b-4f49-8916-1b371c57865e-config\") pod \"dnsmasq-dns-78dd6ddcc-sr5r4\" (UID: \"6e93f61b-ad6b-4f49-8916-1b371c57865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.619625 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4a21b9-6b54-42cd-9dea-630957b1ba47-config\") pod \"dnsmasq-dns-675f4bcbfc-7vgkf\" (UID: \"2c4a21b9-6b54-42cd-9dea-630957b1ba47\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.640141 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92krt\" (UniqueName: \"kubernetes.io/projected/2c4a21b9-6b54-42cd-9dea-630957b1ba47-kube-api-access-92krt\") pod \"dnsmasq-dns-675f4bcbfc-7vgkf\" (UID: \"2c4a21b9-6b54-42cd-9dea-630957b1ba47\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.641649 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g55g\" (UniqueName: \"kubernetes.io/projected/6e93f61b-ad6b-4f49-8916-1b371c57865e-kube-api-access-8g55g\") pod \"dnsmasq-dns-78dd6ddcc-sr5r4\" (UID: \"6e93f61b-ad6b-4f49-8916-1b371c57865e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.714851 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" Nov 28 12:52:56 crc kubenswrapper[4779]: I1128 12:52:56.782613 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:52:57 crc kubenswrapper[4779]: I1128 12:52:57.136299 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7vgkf"] Nov 28 12:52:57 crc kubenswrapper[4779]: W1128 12:52:57.141282 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c4a21b9_6b54_42cd_9dea_630957b1ba47.slice/crio-1dc6a688d3dfd2614051dcd59ad94e959027e84d18fa2fd23e240fb98960f487 WatchSource:0}: Error finding container 1dc6a688d3dfd2614051dcd59ad94e959027e84d18fa2fd23e240fb98960f487: Status 404 returned error can't find the container with id 1dc6a688d3dfd2614051dcd59ad94e959027e84d18fa2fd23e240fb98960f487 Nov 28 12:52:57 crc kubenswrapper[4779]: I1128 12:52:57.144809 4779 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 12:52:57 crc kubenswrapper[4779]: I1128 12:52:57.228474 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sr5r4"] Nov 28 12:52:57 crc kubenswrapper[4779]: W1128 12:52:57.238517 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e93f61b_ad6b_4f49_8916_1b371c57865e.slice/crio-d29d51ac561c77f2a7cff8c95ab62520333ea4e59bdd799cf169bb628970630e WatchSource:0}: Error finding container d29d51ac561c77f2a7cff8c95ab62520333ea4e59bdd799cf169bb628970630e: Status 404 returned error can't find the container with id d29d51ac561c77f2a7cff8c95ab62520333ea4e59bdd799cf169bb628970630e Nov 28 12:52:57 crc kubenswrapper[4779]: I1128 12:52:57.838623 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" event={"ID":"2c4a21b9-6b54-42cd-9dea-630957b1ba47","Type":"ContainerStarted","Data":"1dc6a688d3dfd2614051dcd59ad94e959027e84d18fa2fd23e240fb98960f487"} Nov 28 12:52:57 crc kubenswrapper[4779]: I1128 12:52:57.840765 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" event={"ID":"6e93f61b-ad6b-4f49-8916-1b371c57865e","Type":"ContainerStarted","Data":"d29d51ac561c77f2a7cff8c95ab62520333ea4e59bdd799cf169bb628970630e"} Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.475471 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7vgkf"] Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.503977 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wcndb"] Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.505033 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.522708 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wcndb"] Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.556476 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdf95638-8948-4749-b04d-5a58b43dbc7b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wcndb\" (UID: \"bdf95638-8948-4749-b04d-5a58b43dbc7b\") " pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.556550 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k57v\" (UniqueName: \"kubernetes.io/projected/bdf95638-8948-4749-b04d-5a58b43dbc7b-kube-api-access-2k57v\") pod \"dnsmasq-dns-666b6646f7-wcndb\" (UID: \"bdf95638-8948-4749-b04d-5a58b43dbc7b\") " pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.556600 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf95638-8948-4749-b04d-5a58b43dbc7b-config\") pod \"dnsmasq-dns-666b6646f7-wcndb\" (UID: \"bdf95638-8948-4749-b04d-5a58b43dbc7b\") " pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.657714 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdf95638-8948-4749-b04d-5a58b43dbc7b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wcndb\" (UID: \"bdf95638-8948-4749-b04d-5a58b43dbc7b\") " pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.657793 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k57v\" (UniqueName: \"kubernetes.io/projected/bdf95638-8948-4749-b04d-5a58b43dbc7b-kube-api-access-2k57v\") pod \"dnsmasq-dns-666b6646f7-wcndb\" (UID: \"bdf95638-8948-4749-b04d-5a58b43dbc7b\") " pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.657834 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf95638-8948-4749-b04d-5a58b43dbc7b-config\") pod \"dnsmasq-dns-666b6646f7-wcndb\" (UID: \"bdf95638-8948-4749-b04d-5a58b43dbc7b\") " pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.658964 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf95638-8948-4749-b04d-5a58b43dbc7b-config\") pod \"dnsmasq-dns-666b6646f7-wcndb\" (UID: \"bdf95638-8948-4749-b04d-5a58b43dbc7b\") " pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.659077 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdf95638-8948-4749-b04d-5a58b43dbc7b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wcndb\" (UID: \"bdf95638-8948-4749-b04d-5a58b43dbc7b\") " pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.687464 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k57v\" (UniqueName: 
\"kubernetes.io/projected/bdf95638-8948-4749-b04d-5a58b43dbc7b-kube-api-access-2k57v\") pod \"dnsmasq-dns-666b6646f7-wcndb\" (UID: \"bdf95638-8948-4749-b04d-5a58b43dbc7b\") " pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.753579 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sr5r4"] Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.771038 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7bqlc"] Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.773835 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.796077 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7bqlc"] Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.841042 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.860286 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5defa7a-9cf5-4dca-a5ed-465ef0801609-config\") pod \"dnsmasq-dns-57d769cc4f-7bqlc\" (UID: \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\") " pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.860341 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64p7f\" (UniqueName: \"kubernetes.io/projected/b5defa7a-9cf5-4dca-a5ed-465ef0801609-kube-api-access-64p7f\") pod \"dnsmasq-dns-57d769cc4f-7bqlc\" (UID: \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\") " pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.860359 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5defa7a-9cf5-4dca-a5ed-465ef0801609-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7bqlc\" (UID: \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\") " pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.961282 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5defa7a-9cf5-4dca-a5ed-465ef0801609-config\") pod \"dnsmasq-dns-57d769cc4f-7bqlc\" (UID: \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\") " pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.961610 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64p7f\" (UniqueName: \"kubernetes.io/projected/b5defa7a-9cf5-4dca-a5ed-465ef0801609-kube-api-access-64p7f\") pod \"dnsmasq-dns-57d769cc4f-7bqlc\" (UID: \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\") " pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.961639 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5defa7a-9cf5-4dca-a5ed-465ef0801609-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7bqlc\" (UID: \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\") " pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.962530 4779 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5defa7a-9cf5-4dca-a5ed-465ef0801609-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7bqlc\" (UID: \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\") " pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.963022 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5defa7a-9cf5-4dca-a5ed-465ef0801609-config\") pod \"dnsmasq-dns-57d769cc4f-7bqlc\" (UID: \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\") " pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:52:59 crc kubenswrapper[4779]: I1128 12:52:59.982384 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64p7f\" (UniqueName: \"kubernetes.io/projected/b5defa7a-9cf5-4dca-a5ed-465ef0801609-kube-api-access-64p7f\") pod \"dnsmasq-dns-57d769cc4f-7bqlc\" (UID: \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\") " pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.095836 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.623268 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.624344 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.627388 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.627534 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.627645 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.628909 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.629423 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.629589 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rfrf7" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.630073 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.644590 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.769680 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.769722 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.769749 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.769804 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c8c979a-2995-4080-a0b6-173e62faceee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.769822 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.769839 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.769860 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.769979 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c8c979a-2995-4080-a0b6-173e62faceee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.770017 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.770054 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.772076 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsbgc\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-kube-api-access-fsbgc\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" 
Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.871813 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.872945 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.873000 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c8c979a-2995-4080-a0b6-173e62faceee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.873039 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.873060 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.873082 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.873120 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c8c979a-2995-4080-a0b6-173e62faceee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.873458 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.873140 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.873527 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.873556 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsbgc\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-kube-api-access-fsbgc\") pod 
\"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.875382 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.875448 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-config-data\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.875500 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.893239 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.894192 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.894952 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.895364 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.895733 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.896508 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.897417 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.897823 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c8c979a-2995-4080-a0b6-173e62faceee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: 
I1128 12:53:00.898389 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.898794 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-config-data\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.904827 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.905215 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.905719 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c8c979a-2995-4080-a0b6-173e62faceee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.905943 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.906326 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hd9sn" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.917047 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.921830 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsbgc\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-kube-api-access-fsbgc\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.921909 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.941118 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.953519 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.976897 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/486d0b33-cc59-495a-ba1f-e51c47e0d37e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.976939 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/486d0b33-cc59-495a-ba1f-e51c47e0d37e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.976969 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.976988 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.977013 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.977028 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvw96\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-kube-api-access-nvw96\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.977056 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.977086 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.977123 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.977150 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:00 crc kubenswrapper[4779]: I1128 12:53:00.977166 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.078633 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.078676 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.078716 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/486d0b33-cc59-495a-ba1f-e51c47e0d37e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.078735 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/486d0b33-cc59-495a-ba1f-e51c47e0d37e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.078910 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.078927 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.078943 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvw96\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-kube-api-access-nvw96\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc 
kubenswrapper[4779]: I1128 12:53:01.078966 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.078992 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.079024 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.079045 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.079613 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.079772 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.079926 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.080051 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.080311 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.081084 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.082365 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.083722 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.084962 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/486d0b33-cc59-495a-ba1f-e51c47e0d37e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.097060 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/486d0b33-cc59-495a-ba1f-e51c47e0d37e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.110336 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvw96\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-kube-api-access-nvw96\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.119732 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:01 crc kubenswrapper[4779]: I1128 12:53:01.299685 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.514433 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.518640 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.527736 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.527737 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mx6nf" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.529084 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.529485 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.536801 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.540333 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.599970 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bd0f63de-dfe7-471d-92d8-b41e260d970b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.600504 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g24dq\" (UniqueName: \"kubernetes.io/projected/bd0f63de-dfe7-471d-92d8-b41e260d970b-kube-api-access-g24dq\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.600617 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bd0f63de-dfe7-471d-92d8-b41e260d970b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.600787 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bd0f63de-dfe7-471d-92d8-b41e260d970b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.600850 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0f63de-dfe7-471d-92d8-b41e260d970b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.600976 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd0f63de-dfe7-471d-92d8-b41e260d970b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.601152 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd0f63de-dfe7-471d-92d8-b41e260d970b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.601227 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.703286 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bd0f63de-dfe7-471d-92d8-b41e260d970b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.703362 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g24dq\" (UniqueName: \"kubernetes.io/projected/bd0f63de-dfe7-471d-92d8-b41e260d970b-kube-api-access-g24dq\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.703446 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bd0f63de-dfe7-471d-92d8-b41e260d970b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.703526 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bd0f63de-dfe7-471d-92d8-b41e260d970b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.703563 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0f63de-dfe7-471d-92d8-b41e260d970b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.703604 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd0f63de-dfe7-471d-92d8-b41e260d970b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.703675 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd0f63de-dfe7-471d-92d8-b41e260d970b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.703724 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: 
\"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.703933 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.704473 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bd0f63de-dfe7-471d-92d8-b41e260d970b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.704746 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bd0f63de-dfe7-471d-92d8-b41e260d970b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.704924 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bd0f63de-dfe7-471d-92d8-b41e260d970b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.705917 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd0f63de-dfe7-471d-92d8-b41e260d970b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.709371 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0f63de-dfe7-471d-92d8-b41e260d970b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.726356 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd0f63de-dfe7-471d-92d8-b41e260d970b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.729852 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g24dq\" (UniqueName: \"kubernetes.io/projected/bd0f63de-dfe7-471d-92d8-b41e260d970b-kube-api-access-g24dq\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.738609 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"bd0f63de-dfe7-471d-92d8-b41e260d970b\") " pod="openstack/openstack-galera-0" Nov 28 12:53:02 crc kubenswrapper[4779]: I1128 12:53:02.884464 4779 util.go:30] "No sandbox for pod can be found. 
Nov 28 12:53:03 crc kubenswrapper[4779]: I1128 12:53:03.913844 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 28 12:53:03 crc kubenswrapper[4779]: I1128 12:53:03.915794 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Nov 28 12:53:03 crc kubenswrapper[4779]: I1128 12:53:03.920698 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Nov 28 12:53:03 crc kubenswrapper[4779]: I1128 12:53:03.920874 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Nov 28 12:53:03 crc kubenswrapper[4779]: I1128 12:53:03.921368 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Nov 28 12:53:03 crc kubenswrapper[4779]: I1128 12:53:03.921925 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-qrdzl"
Nov 28 12:53:03 crc kubenswrapper[4779]: I1128 12:53:03.958973 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.023116 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c27e5f17-320d-472d-a3e7-6a0e9fae960b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.023453 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c27e5f17-320d-472d-a3e7-6a0e9fae960b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.023609 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.023728 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52q86\" (UniqueName: \"kubernetes.io/projected/c27e5f17-320d-472d-a3e7-6a0e9fae960b-kube-api-access-52q86\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.023875 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27e5f17-320d-472d-a3e7-6a0e9fae960b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.024017 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName:
\"kubernetes.io/configmap/c27e5f17-320d-472d-a3e7-6a0e9fae960b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.024151 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27e5f17-320d-472d-a3e7-6a0e9fae960b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.024275 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c27e5f17-320d-472d-a3e7-6a0e9fae960b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.126158 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27e5f17-320d-472d-a3e7-6a0e9fae960b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.126484 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c27e5f17-320d-472d-a3e7-6a0e9fae960b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.126614 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27e5f17-320d-472d-a3e7-6a0e9fae960b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.126731 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c27e5f17-320d-472d-a3e7-6a0e9fae960b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.126853 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c27e5f17-320d-472d-a3e7-6a0e9fae960b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.126974 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c27e5f17-320d-472d-a3e7-6a0e9fae960b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.127116 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.127236 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52q86\" (UniqueName: \"kubernetes.io/projected/c27e5f17-320d-472d-a3e7-6a0e9fae960b-kube-api-access-52q86\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.127376 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.127428 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c27e5f17-320d-472d-a3e7-6a0e9fae960b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.127863 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c27e5f17-320d-472d-a3e7-6a0e9fae960b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.127904 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c27e5f17-320d-472d-a3e7-6a0e9fae960b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.144893 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27e5f17-320d-472d-a3e7-6a0e9fae960b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.148869 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52q86\" (UniqueName: \"kubernetes.io/projected/c27e5f17-320d-472d-a3e7-6a0e9fae960b-kube-api-access-52q86\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.149171 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27e5f17-320d-472d-a3e7-6a0e9fae960b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.161588 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c27e5f17-320d-472d-a3e7-6a0e9fae960b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: 
\"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.185310 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c27e5f17-320d-472d-a3e7-6a0e9fae960b\") " pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.232175 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.233309 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.235068 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qdbnc" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.235413 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.236640 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.241882 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.250745 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.330178 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28783fa8-aac9-4041-aba2-ba78f5be6f66-kolla-config\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.330237 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28783fa8-aac9-4041-aba2-ba78f5be6f66-combined-ca-bundle\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.330338 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzg54\" (UniqueName: \"kubernetes.io/projected/28783fa8-aac9-4041-aba2-ba78f5be6f66-kube-api-access-jzg54\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.330374 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28783fa8-aac9-4041-aba2-ba78f5be6f66-memcached-tls-certs\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.330575 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28783fa8-aac9-4041-aba2-ba78f5be6f66-config-data\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0" Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.431943 4779 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzg54\" (UniqueName: \"kubernetes.io/projected/28783fa8-aac9-4041-aba2-ba78f5be6f66-kube-api-access-jzg54\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.432417 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28783fa8-aac9-4041-aba2-ba78f5be6f66-memcached-tls-certs\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.432454 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28783fa8-aac9-4041-aba2-ba78f5be6f66-config-data\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.432495 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28783fa8-aac9-4041-aba2-ba78f5be6f66-kolla-config\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.432513 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28783fa8-aac9-4041-aba2-ba78f5be6f66-combined-ca-bundle\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.433416 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28783fa8-aac9-4041-aba2-ba78f5be6f66-kolla-config\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.433532 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28783fa8-aac9-4041-aba2-ba78f5be6f66-config-data\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.436715 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28783fa8-aac9-4041-aba2-ba78f5be6f66-combined-ca-bundle\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.436739 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28783fa8-aac9-4041-aba2-ba78f5be6f66-memcached-tls-certs\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.454573 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzg54\" (UniqueName: \"kubernetes.io/projected/28783fa8-aac9-4041-aba2-ba78f5be6f66-kube-api-access-jzg54\") pod \"memcached-0\" (UID: \"28783fa8-aac9-4041-aba2-ba78f5be6f66\") " pod="openstack/memcached-0"
Nov 28 12:53:04 crc kubenswrapper[4779]: I1128 12:53:04.546432 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
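
memcached-0 declares only ConfigMaps, Secrets, and its kube-api-access-jzg54 volume — no PV — so all five mounts complete within a few tens of milliseconds. Every pod in this log carries exactly one such kube-api-access-* volume, which is a projected volume; in recent Kubernetes this is injected by the ServiceAccount admission and typically combines a bound service-account token, the kube-root-ca.crt ConfigMap, and the namespace via the downward API (stated here as an assumption about the platform, not something read from this log). A sketch that inspects the composition:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("openstack").Get(context.TODO(), "memcached-0", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Walk the projected volume's sources (kube-api-access-* in this log).
    	for _, v := range pod.Spec.Volumes {
    		if v.Projected == nil {
    			continue
    		}
    		for _, src := range v.Projected.Sources {
    			switch {
    			case src.ServiceAccountToken != nil:
    				fmt.Printf("%s: bound SA token, path %s\n", v.Name, src.ServiceAccountToken.Path)
    			case src.ConfigMap != nil:
    				fmt.Printf("%s: configmap %s\n", v.Name, src.ConfigMap.Name)
    			case src.DownwardAPI != nil:
    				fmt.Printf("%s: downward API\n", v.Name)
    			case src.Secret != nil:
    				fmt.Printf("%s: secret %s\n", v.Name, src.Secret.Name)
    			}
    		}
    	}
    }
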
"No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 12:53:06 crc kubenswrapper[4779]: I1128 12:53:06.497153 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 12:53:06 crc kubenswrapper[4779]: I1128 12:53:06.498191 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 12:53:06 crc kubenswrapper[4779]: I1128 12:53:06.500478 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-xdf8w" Nov 28 12:53:06 crc kubenswrapper[4779]: I1128 12:53:06.506908 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 12:53:06 crc kubenswrapper[4779]: I1128 12:53:06.571030 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxs72\" (UniqueName: \"kubernetes.io/projected/8482bdcc-fe9d-4ed6-8ade-a1319330b252-kube-api-access-fxs72\") pod \"kube-state-metrics-0\" (UID: \"8482bdcc-fe9d-4ed6-8ade-a1319330b252\") " pod="openstack/kube-state-metrics-0" Nov 28 12:53:06 crc kubenswrapper[4779]: I1128 12:53:06.671846 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxs72\" (UniqueName: \"kubernetes.io/projected/8482bdcc-fe9d-4ed6-8ade-a1319330b252-kube-api-access-fxs72\") pod \"kube-state-metrics-0\" (UID: \"8482bdcc-fe9d-4ed6-8ade-a1319330b252\") " pod="openstack/kube-state-metrics-0" Nov 28 12:53:06 crc kubenswrapper[4779]: I1128 12:53:06.705969 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxs72\" (UniqueName: \"kubernetes.io/projected/8482bdcc-fe9d-4ed6-8ade-a1319330b252-kube-api-access-fxs72\") pod \"kube-state-metrics-0\" (UID: \"8482bdcc-fe9d-4ed6-8ade-a1319330b252\") " pod="openstack/kube-state-metrics-0" Nov 28 12:53:06 crc kubenswrapper[4779]: I1128 12:53:06.864486 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.338975 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7bg4l"] Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.339910 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.341632 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.342719 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.352615 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-spndh" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.361759 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7bg4l"] Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.376647 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-c6d9j"] Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.378132 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.407357 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-c6d9j"] Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.441212 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5049f1f8-c081-4671-8d6a-9282a53dd6bd-var-run\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.441264 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwn2w\" (UniqueName: \"kubernetes.io/projected/5049f1f8-c081-4671-8d6a-9282a53dd6bd-kube-api-access-xwn2w\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.441292 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5049f1f8-c081-4671-8d6a-9282a53dd6bd-scripts\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.441316 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5049f1f8-c081-4671-8d6a-9282a53dd6bd-var-run-ovn\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.441338 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5049f1f8-c081-4671-8d6a-9282a53dd6bd-ovn-controller-tls-certs\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.441384 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5049f1f8-c081-4671-8d6a-9282a53dd6bd-combined-ca-bundle\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.441412 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5049f1f8-c081-4671-8d6a-9282a53dd6bd-var-log-ovn\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.542804 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5049f1f8-c081-4671-8d6a-9282a53dd6bd-scripts\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.542867 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-var-lib\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.542894 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5049f1f8-c081-4671-8d6a-9282a53dd6bd-var-run-ovn\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.542922 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pplfj\" (UniqueName: \"kubernetes.io/projected/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-kube-api-access-pplfj\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.542948 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5049f1f8-c081-4671-8d6a-9282a53dd6bd-ovn-controller-tls-certs\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.543073 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-var-log\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.543256 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5049f1f8-c081-4671-8d6a-9282a53dd6bd-combined-ca-bundle\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.543501 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5049f1f8-c081-4671-8d6a-9282a53dd6bd-var-log-ovn\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.543538 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-etc-ovs\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.543606 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-scripts\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.543656 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5049f1f8-c081-4671-8d6a-9282a53dd6bd-var-run\") pod \"ovn-controller-7bg4l\" (UID: 
\"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.543711 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-var-run\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.543734 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwn2w\" (UniqueName: \"kubernetes.io/projected/5049f1f8-c081-4671-8d6a-9282a53dd6bd-kube-api-access-xwn2w\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.543970 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5049f1f8-c081-4671-8d6a-9282a53dd6bd-var-run-ovn\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.544600 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5049f1f8-c081-4671-8d6a-9282a53dd6bd-var-log-ovn\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.544596 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5049f1f8-c081-4671-8d6a-9282a53dd6bd-var-run\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.545942 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5049f1f8-c081-4671-8d6a-9282a53dd6bd-scripts\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.549923 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5049f1f8-c081-4671-8d6a-9282a53dd6bd-combined-ca-bundle\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.550233 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5049f1f8-c081-4671-8d6a-9282a53dd6bd-ovn-controller-tls-certs\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.566462 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwn2w\" (UniqueName: \"kubernetes.io/projected/5049f1f8-c081-4671-8d6a-9282a53dd6bd-kube-api-access-xwn2w\") pod \"ovn-controller-7bg4l\" (UID: \"5049f1f8-c081-4671-8d6a-9282a53dd6bd\") " pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.644828 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-scripts\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.644885 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-var-run\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.644919 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-var-lib\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.644936 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pplfj\" (UniqueName: \"kubernetes.io/projected/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-kube-api-access-pplfj\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.644964 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-var-log\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.645015 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-etc-ovs\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.645264 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-etc-ovs\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.645264 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-var-run\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.645352 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-var-log\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.645357 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-var-lib\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:09 crc 
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.664031 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7bg4l"
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.674297 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pplfj\" (UniqueName: \"kubernetes.io/projected/a9ef6128-c3cf-4c5a-80ff-e0c4c263637d-kube-api-access-pplfj\") pod \"ovn-controller-ovs-c6d9j\" (UID: \"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d\") " pod="openstack/ovn-controller-ovs-c6d9j"
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.699246 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-c6d9j"
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.829602 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.831338 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.836794 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.837141 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-ckgqx"
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.837308 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.837463 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.838581 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.838893 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.948426 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.948475 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbwqp\" (UniqueName: \"kubernetes.io/projected/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-kube-api-access-hbwqp\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0"
Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.948519 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") "
pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.948542 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.948577 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-config\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.948596 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.948615 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:09 crc kubenswrapper[4779]: I1128 12:53:09.948643 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.050218 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-config\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.050264 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.050283 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.050315 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.050364 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.050382 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbwqp\" (UniqueName: \"kubernetes.io/projected/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-kube-api-access-hbwqp\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.050416 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.050433 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.050669 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.051463 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.051490 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-config\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.052226 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.056676 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.067327 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.081695 4779 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.097568 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbwqp\" (UniqueName: \"kubernetes.io/projected/aa122564-e2c8-4ceb-b66d-1b677aaa4b21-kube-api-access-hbwqp\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.114166 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aa122564-e2c8-4ceb-b66d-1b677aaa4b21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:10 crc kubenswrapper[4779]: I1128 12:53:10.168013 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.587541 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.589437 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.593610 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-rdfks" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.593904 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.594185 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.594297 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.594381 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.721409 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5vq5\" (UniqueName: \"kubernetes.io/projected/7312815e-950e-48e2-bcbe-c74717279168-kube-api-access-m5vq5\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.721772 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7312815e-950e-48e2-bcbe-c74717279168-config\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.721806 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7312815e-950e-48e2-bcbe-c74717279168-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.721871 4779 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7312815e-950e-48e2-bcbe-c74717279168-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.721997 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7312815e-950e-48e2-bcbe-c74717279168-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.722077 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7312815e-950e-48e2-bcbe-c74717279168-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.722174 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7312815e-950e-48e2-bcbe-c74717279168-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.722230 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.824151 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7312815e-950e-48e2-bcbe-c74717279168-config\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.824222 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7312815e-950e-48e2-bcbe-c74717279168-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.824327 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7312815e-950e-48e2-bcbe-c74717279168-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.824408 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7312815e-950e-48e2-bcbe-c74717279168-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.824458 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/7312815e-950e-48e2-bcbe-c74717279168-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.824510 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7312815e-950e-48e2-bcbe-c74717279168-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.824559 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.824603 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5vq5\" (UniqueName: \"kubernetes.io/projected/7312815e-950e-48e2-bcbe-c74717279168-kube-api-access-m5vq5\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.825956 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7312815e-950e-48e2-bcbe-c74717279168-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.826392 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.826687 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7312815e-950e-48e2-bcbe-c74717279168-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.827273 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7312815e-950e-48e2-bcbe-c74717279168-config\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.834902 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7312815e-950e-48e2-bcbe-c74717279168-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.838171 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7312815e-950e-48e2-bcbe-c74717279168-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.840909 4779 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7312815e-950e-48e2-bcbe-c74717279168-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.849440 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5vq5\" (UniqueName: \"kubernetes.io/projected/7312815e-950e-48e2-bcbe-c74717279168-kube-api-access-m5vq5\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.863488 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7312815e-950e-48e2-bcbe-c74717279168\") " pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:13 crc kubenswrapper[4779]: I1128 12:53:13.910628 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:15 crc kubenswrapper[4779]: I1128 12:53:15.686415 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wcndb"] Nov 28 12:53:16 crc kubenswrapper[4779]: E1128 12:53:16.345767 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 28 12:53:16 crc kubenswrapper[4779]: E1128 12:53:16.345937 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g55g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-sr5r4_openstack(6e93f61b-ad6b-4f49-8916-1b371c57865e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 12:53:16 crc kubenswrapper[4779]: W1128 12:53:16.347005 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf95638_8948_4749_b04d_5a58b43dbc7b.slice/crio-390e8ef0dd3ec50117d43627ac149a50ea0ff31d015605378fd56700e80aee3c WatchSource:0}: Error finding container 390e8ef0dd3ec50117d43627ac149a50ea0ff31d015605378fd56700e80aee3c: Status 404 returned error can't find the container with id 390e8ef0dd3ec50117d43627ac149a50ea0ff31d015605378fd56700e80aee3c Nov 28 12:53:16 crc kubenswrapper[4779]: E1128 12:53:16.347381 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" podUID="6e93f61b-ad6b-4f49-8916-1b371c57865e" Nov 28 12:53:16 crc kubenswrapper[4779]: E1128 12:53:16.385594 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 28 12:53:16 crc kubenswrapper[4779]: E1128 12:53:16.385742 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv 
--bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92krt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-7vgkf_openstack(2c4a21b9-6b54-42cd-9dea-630957b1ba47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 12:53:16 crc kubenswrapper[4779]: E1128 12:53:16.393302 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" podUID="2c4a21b9-6b54-42cd-9dea-630957b1ba47" Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.030161 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.046950 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wcndb" event={"ID":"bdf95638-8948-4749-b04d-5a58b43dbc7b","Type":"ContainerStarted","Data":"390e8ef0dd3ec50117d43627ac149a50ea0ff31d015605378fd56700e80aee3c"} Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.060199 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 12:53:17 crc kubenswrapper[4779]: W1128 12:53:17.079334 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8482bdcc_fe9d_4ed6_8ade_a1319330b252.slice/crio-173be10794ffd53f43c10a6e498e0317e04e24dcb2bd9dbd0cb83b1c10cf4c6a WatchSource:0}: Error finding container 173be10794ffd53f43c10a6e498e0317e04e24dcb2bd9dbd0cb83b1c10cf4c6a: Status 404 returned error can't find the container with id 173be10794ffd53f43c10a6e498e0317e04e24dcb2bd9dbd0cb83b1c10cf4c6a Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.082735 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.236401 4779 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/memcached-0"] Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.274466 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.409729 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7bg4l"] Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.426442 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.440066 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7bqlc"] Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.556868 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.558415 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.596169 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.665941 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-c6d9j"] Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.715584 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92krt\" (UniqueName: \"kubernetes.io/projected/2c4a21b9-6b54-42cd-9dea-630957b1ba47-kube-api-access-92krt\") pod \"2c4a21b9-6b54-42cd-9dea-630957b1ba47\" (UID: \"2c4a21b9-6b54-42cd-9dea-630957b1ba47\") " Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.715627 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e93f61b-ad6b-4f49-8916-1b371c57865e-dns-svc\") pod \"6e93f61b-ad6b-4f49-8916-1b371c57865e\" (UID: \"6e93f61b-ad6b-4f49-8916-1b371c57865e\") " Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.715662 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g55g\" (UniqueName: \"kubernetes.io/projected/6e93f61b-ad6b-4f49-8916-1b371c57865e-kube-api-access-8g55g\") pod \"6e93f61b-ad6b-4f49-8916-1b371c57865e\" (UID: \"6e93f61b-ad6b-4f49-8916-1b371c57865e\") " Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.715778 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4a21b9-6b54-42cd-9dea-630957b1ba47-config\") pod \"2c4a21b9-6b54-42cd-9dea-630957b1ba47\" (UID: \"2c4a21b9-6b54-42cd-9dea-630957b1ba47\") " Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.715820 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e93f61b-ad6b-4f49-8916-1b371c57865e-config\") pod \"6e93f61b-ad6b-4f49-8916-1b371c57865e\" (UID: \"6e93f61b-ad6b-4f49-8916-1b371c57865e\") " Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.716360 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c4a21b9-6b54-42cd-9dea-630957b1ba47-config" (OuterVolumeSpecName: "config") pod "2c4a21b9-6b54-42cd-9dea-630957b1ba47" (UID: "2c4a21b9-6b54-42cd-9dea-630957b1ba47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.716380 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e93f61b-ad6b-4f49-8916-1b371c57865e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e93f61b-ad6b-4f49-8916-1b371c57865e" (UID: "6e93f61b-ad6b-4f49-8916-1b371c57865e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.716594 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e93f61b-ad6b-4f49-8916-1b371c57865e-config" (OuterVolumeSpecName: "config") pod "6e93f61b-ad6b-4f49-8916-1b371c57865e" (UID: "6e93f61b-ad6b-4f49-8916-1b371c57865e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.722521 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c4a21b9-6b54-42cd-9dea-630957b1ba47-kube-api-access-92krt" (OuterVolumeSpecName: "kube-api-access-92krt") pod "2c4a21b9-6b54-42cd-9dea-630957b1ba47" (UID: "2c4a21b9-6b54-42cd-9dea-630957b1ba47"). InnerVolumeSpecName "kube-api-access-92krt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.722614 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e93f61b-ad6b-4f49-8916-1b371c57865e-kube-api-access-8g55g" (OuterVolumeSpecName: "kube-api-access-8g55g") pod "6e93f61b-ad6b-4f49-8916-1b371c57865e" (UID: "6e93f61b-ad6b-4f49-8916-1b371c57865e"). InnerVolumeSpecName "kube-api-access-8g55g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:53:17 crc kubenswrapper[4779]: W1128 12:53:17.734327 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9ef6128_c3cf_4c5a_80ff_e0c4c263637d.slice/crio-b6aafb0c4eb04aa614e20d50d89665e24750decdd584fda8afa7c0b333dcbb2f WatchSource:0}: Error finding container b6aafb0c4eb04aa614e20d50d89665e24750decdd584fda8afa7c0b333dcbb2f: Status 404 returned error can't find the container with id b6aafb0c4eb04aa614e20d50d89665e24750decdd584fda8afa7c0b333dcbb2f Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.817158 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g55g\" (UniqueName: \"kubernetes.io/projected/6e93f61b-ad6b-4f49-8916-1b371c57865e-kube-api-access-8g55g\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.817190 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4a21b9-6b54-42cd-9dea-630957b1ba47-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.817199 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e93f61b-ad6b-4f49-8916-1b371c57865e-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.817209 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92krt\" (UniqueName: \"kubernetes.io/projected/2c4a21b9-6b54-42cd-9dea-630957b1ba47-kube-api-access-92krt\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:17 crc kubenswrapper[4779]: I1128 12:53:17.817218 4779 reconciler_common.go:293] "Volume detached for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e93f61b-ad6b-4f49-8916-1b371c57865e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.057385 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.057383 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7vgkf" event={"ID":"2c4a21b9-6b54-42cd-9dea-630957b1ba47","Type":"ContainerDied","Data":"1dc6a688d3dfd2614051dcd59ad94e959027e84d18fa2fd23e240fb98960f487"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.059712 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7bg4l" event={"ID":"5049f1f8-c081-4671-8d6a-9282a53dd6bd","Type":"ContainerStarted","Data":"1af4ea7f33c4667739fe2a7c8bf73de4ae028c60174b5dd8679fb0266d7d8262"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.063931 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.063930 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sr5r4" event={"ID":"6e93f61b-ad6b-4f49-8916-1b371c57865e","Type":"ContainerDied","Data":"d29d51ac561c77f2a7cff8c95ab62520333ea4e59bdd799cf169bb628970630e"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.065831 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"28783fa8-aac9-4041-aba2-ba78f5be6f66","Type":"ContainerStarted","Data":"d67131df04731a31c520c6a7b19c3857f27ad9b8a3569ebe4003ae3142ec2275"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.067726 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"486d0b33-cc59-495a-ba1f-e51c47e0d37e","Type":"ContainerStarted","Data":"1ba9d72ed5e744bd85732a0b99869643440eeff5f16e16d5c32a71aa5427148d"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.070277 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8482bdcc-fe9d-4ed6-8ade-a1319330b252","Type":"ContainerStarted","Data":"173be10794ffd53f43c10a6e498e0317e04e24dcb2bd9dbd0cb83b1c10cf4c6a"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.071618 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aa122564-e2c8-4ceb-b66d-1b677aaa4b21","Type":"ContainerStarted","Data":"fec916e4bd653a12ea8ca5b8ff1f14fb0e58586eaad51276e61d7eee01bd4cc2"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.074401 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bd0f63de-dfe7-471d-92d8-b41e260d970b","Type":"ContainerStarted","Data":"5b45767a2283f9be281be991081b36961d36e2dd70f1abfe75073ce91d5cbc2a"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.077291 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c27e5f17-320d-472d-a3e7-6a0e9fae960b","Type":"ContainerStarted","Data":"fc2b6dac65fbb544435c556320c898704b52c62d128081eb7b31b4b24fac582a"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.093306 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"1c8c979a-2995-4080-a0b6-173e62faceee","Type":"ContainerStarted","Data":"25b313164b8d335e28fed852f3a3ed9d335636280c71f57d26e606166bc4fcbb"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.095860 4779 generic.go:334] "Generic (PLEG): container finished" podID="b5defa7a-9cf5-4dca-a5ed-465ef0801609" containerID="9fd42989e6b77c0052a1d260b586fe7b14e11213256e50dc41140c29e6db731d" exitCode=0 Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.095953 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" event={"ID":"b5defa7a-9cf5-4dca-a5ed-465ef0801609","Type":"ContainerDied","Data":"9fd42989e6b77c0052a1d260b586fe7b14e11213256e50dc41140c29e6db731d"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.095990 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" event={"ID":"b5defa7a-9cf5-4dca-a5ed-465ef0801609","Type":"ContainerStarted","Data":"ad502463711eed7f85949acded8e59cb61091aeb15026f571b6f1ee5572db478"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.104129 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7vgkf"] Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.106862 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c6d9j" event={"ID":"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d","Type":"ContainerStarted","Data":"b6aafb0c4eb04aa614e20d50d89665e24750decdd584fda8afa7c0b333dcbb2f"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.110979 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7vgkf"] Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.125078 4779 generic.go:334] "Generic (PLEG): container finished" podID="bdf95638-8948-4749-b04d-5a58b43dbc7b" containerID="81b115e606113ecdab4b919e5cfcd75ffaa9504ee6300b1864eecb86f10bee60" exitCode=0 Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.125159 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wcndb" event={"ID":"bdf95638-8948-4749-b04d-5a58b43dbc7b","Type":"ContainerDied","Data":"81b115e606113ecdab4b919e5cfcd75ffaa9504ee6300b1864eecb86f10bee60"} Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.137574 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sr5r4"] Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.150860 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sr5r4"] Nov 28 12:53:18 crc kubenswrapper[4779]: I1128 12:53:18.644785 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 12:53:18 crc kubenswrapper[4779]: W1128 12:53:18.871428 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7312815e_950e_48e2_bcbe_c74717279168.slice/crio-d5714cf72e150154214b2070bef76dc8b74786a6b919fa55a92c1a9f169690ac WatchSource:0}: Error finding container d5714cf72e150154214b2070bef76dc8b74786a6b919fa55a92c1a9f169690ac: Status 404 returned error can't find the container with id d5714cf72e150154214b2070bef76dc8b74786a6b919fa55a92c1a9f169690ac Nov 28 12:53:19 crc kubenswrapper[4779]: I1128 12:53:19.138577 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"7312815e-950e-48e2-bcbe-c74717279168","Type":"ContainerStarted","Data":"d5714cf72e150154214b2070bef76dc8b74786a6b919fa55a92c1a9f169690ac"} Nov 28 12:53:19 crc kubenswrapper[4779]: I1128 12:53:19.735789 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c4a21b9-6b54-42cd-9dea-630957b1ba47" path="/var/lib/kubelet/pods/2c4a21b9-6b54-42cd-9dea-630957b1ba47/volumes" Nov 28 12:53:19 crc kubenswrapper[4779]: I1128 12:53:19.736263 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e93f61b-ad6b-4f49-8916-1b371c57865e" path="/var/lib/kubelet/pods/6e93f61b-ad6b-4f49-8916-1b371c57865e/volumes" Nov 28 12:53:26 crc kubenswrapper[4779]: E1128 12:53:26.573925 4779 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 28 12:53:26 crc kubenswrapper[4779]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/bdf95638-8948-4749-b04d-5a58b43dbc7b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 28 12:53:26 crc kubenswrapper[4779]: > podSandboxID="390e8ef0dd3ec50117d43627ac149a50ea0ff31d015605378fd56700e80aee3c" Nov 28 12:53:26 crc kubenswrapper[4779]: E1128 12:53:26.574646 4779 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 28 12:53:26 crc kubenswrapper[4779]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2k57v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-wcndb_openstack(bdf95638-8948-4749-b04d-5a58b43dbc7b): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/bdf95638-8948-4749-b04d-5a58b43dbc7b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 28 12:53:26 crc kubenswrapper[4779]: > logger="UnhandledError" Nov 28 12:53:26 crc kubenswrapper[4779]: E1128 12:53:26.575747 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/bdf95638-8948-4749-b04d-5a58b43dbc7b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-wcndb" podUID="bdf95638-8948-4749-b04d-5a58b43dbc7b" Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.207490 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8482bdcc-fe9d-4ed6-8ade-a1319330b252","Type":"ContainerStarted","Data":"fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e"} Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.208460 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.209807 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7312815e-950e-48e2-bcbe-c74717279168","Type":"ContainerStarted","Data":"5c2944eaed53c068e80a284615332e23b62baee984ea4c563ee7ca51230b48dc"} Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.211280 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aa122564-e2c8-4ceb-b66d-1b677aaa4b21","Type":"ContainerStarted","Data":"f4cbf49cab624a02fee2735b82241febd6586f9b275f77692c28a8eed792b128"} Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.213044 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bd0f63de-dfe7-471d-92d8-b41e260d970b","Type":"ContainerStarted","Data":"a1dba365dd7d79ba823ae95715f8a720d6254d016c67e6bdb12ab3dd4cdaaa4d"} Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.214455 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" event={"ID":"b5defa7a-9cf5-4dca-a5ed-465ef0801609","Type":"ContainerStarted","Data":"da1791f2d0c4c99fb925cf07c9e0ed972c6d8538a0acca4eb851288218b0b685"} Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.214867 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 
12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.215812 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c27e5f17-320d-472d-a3e7-6a0e9fae960b","Type":"ContainerStarted","Data":"71a141a6c20fd6c507b350536184d6e6a954484f621e1733ba8f094f7ecc0586"} Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.222792 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7bg4l" event={"ID":"5049f1f8-c081-4671-8d6a-9282a53dd6bd","Type":"ContainerStarted","Data":"6a8239abbd7b8fbdc5579ca4017a1f7eb80c8c570d63609e9d1a325b1e2e7eb0"} Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.223444 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7bg4l" Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.226343 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c6d9j" event={"ID":"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d","Type":"ContainerStarted","Data":"c3c644bc28f174fd113066f52bdf32788c2c0a3fcfe144281c4fcb0fb31980ac"} Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.230298 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=11.846701443 podStartE2EDuration="21.230279182s" podCreationTimestamp="2025-11-28 12:53:06 +0000 UTC" firstStartedPulling="2025-11-28 12:53:17.106391475 +0000 UTC m=+1057.672066829" lastFinishedPulling="2025-11-28 12:53:26.489969194 +0000 UTC m=+1067.055644568" observedRunningTime="2025-11-28 12:53:27.225316109 +0000 UTC m=+1067.790991463" watchObservedRunningTime="2025-11-28 12:53:27.230279182 +0000 UTC m=+1067.795954536" Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.231460 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"28783fa8-aac9-4041-aba2-ba78f5be6f66","Type":"ContainerStarted","Data":"fb04cd98ad55438260a46b44b8e1c5569ae4e5953b1da93c2b2e6c04fc949409"} Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.231611 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.288021 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7bg4l" podStartSLOduration=10.061890963 podStartE2EDuration="18.28800199s" podCreationTimestamp="2025-11-28 12:53:09 +0000 UTC" firstStartedPulling="2025-11-28 12:53:17.441326725 +0000 UTC m=+1058.007002079" lastFinishedPulling="2025-11-28 12:53:25.667437752 +0000 UTC m=+1066.233113106" observedRunningTime="2025-11-28 12:53:27.283365515 +0000 UTC m=+1067.849040879" watchObservedRunningTime="2025-11-28 12:53:27.28800199 +0000 UTC m=+1067.853677354" Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.297185 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" podStartSLOduration=28.297170685 podStartE2EDuration="28.297170685s" podCreationTimestamp="2025-11-28 12:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:53:27.296860107 +0000 UTC m=+1067.862535461" watchObservedRunningTime="2025-11-28 12:53:27.297170685 +0000 UTC m=+1067.862846059" Nov 28 12:53:27 crc kubenswrapper[4779]: I1128 12:53:27.352374 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" 
podStartSLOduration=14.305252086 podStartE2EDuration="23.352356935s" podCreationTimestamp="2025-11-28 12:53:04 +0000 UTC" firstStartedPulling="2025-11-28 12:53:17.274685547 +0000 UTC m=+1057.840360891" lastFinishedPulling="2025-11-28 12:53:26.321790346 +0000 UTC m=+1066.887465740" observedRunningTime="2025-11-28 12:53:27.334835155 +0000 UTC m=+1067.900510509" watchObservedRunningTime="2025-11-28 12:53:27.352356935 +0000 UTC m=+1067.918032299" Nov 28 12:53:28 crc kubenswrapper[4779]: I1128 12:53:28.238683 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1c8c979a-2995-4080-a0b6-173e62faceee","Type":"ContainerStarted","Data":"83a10acad2ad96fbf02da7dec091f283aa84123110fb7c2a468f72da9c94c337"} Nov 28 12:53:28 crc kubenswrapper[4779]: I1128 12:53:28.240214 4779 generic.go:334] "Generic (PLEG): container finished" podID="a9ef6128-c3cf-4c5a-80ff-e0c4c263637d" containerID="c3c644bc28f174fd113066f52bdf32788c2c0a3fcfe144281c4fcb0fb31980ac" exitCode=0 Nov 28 12:53:28 crc kubenswrapper[4779]: I1128 12:53:28.240276 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c6d9j" event={"ID":"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d","Type":"ContainerDied","Data":"c3c644bc28f174fd113066f52bdf32788c2c0a3fcfe144281c4fcb0fb31980ac"} Nov 28 12:53:28 crc kubenswrapper[4779]: I1128 12:53:28.241823 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"486d0b33-cc59-495a-ba1f-e51c47e0d37e","Type":"ContainerStarted","Data":"dd6084584efb63ab6484ff18173ba4f693e06a7fcd6ab961420fb6eb533c5733"} Nov 28 12:53:28 crc kubenswrapper[4779]: I1128 12:53:28.253153 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wcndb" event={"ID":"bdf95638-8948-4749-b04d-5a58b43dbc7b","Type":"ContainerStarted","Data":"0cfe64363d1966463bf72066df4f2cff29542d59f74c0e7cac6c8ca808c6c3ec"} Nov 28 12:53:28 crc kubenswrapper[4779]: I1128 12:53:28.283812 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-wcndb" podStartSLOduration=28.686172855 podStartE2EDuration="29.283793437s" podCreationTimestamp="2025-11-28 12:52:59 +0000 UTC" firstStartedPulling="2025-11-28 12:53:16.354739624 +0000 UTC m=+1056.920414988" lastFinishedPulling="2025-11-28 12:53:16.952360206 +0000 UTC m=+1057.518035570" observedRunningTime="2025-11-28 12:53:28.279791069 +0000 UTC m=+1068.845466423" watchObservedRunningTime="2025-11-28 12:53:28.283793437 +0000 UTC m=+1068.849468791" Nov 28 12:53:29 crc kubenswrapper[4779]: I1128 12:53:29.268385 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c6d9j" event={"ID":"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d","Type":"ContainerStarted","Data":"f8e59738d6a18e88f76049df456246191399f548cbf81ee27cf7e33afc63b88d"} Nov 28 12:53:29 crc kubenswrapper[4779]: I1128 12:53:29.841962 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:53:34 crc kubenswrapper[4779]: I1128 12:53:34.548708 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 28 12:53:34 crc kubenswrapper[4779]: I1128 12:53:34.844172 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-wcndb" Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.097293 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.156728 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wcndb"] Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.316856 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-wcndb" podUID="bdf95638-8948-4749-b04d-5a58b43dbc7b" containerName="dnsmasq-dns" containerID="cri-o://0cfe64363d1966463bf72066df4f2cff29542d59f74c0e7cac6c8ca808c6c3ec" gracePeriod=10 Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.575049 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-szlhd"] Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.576320 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-szlhd" Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.580045 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.586012 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-szlhd"] Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.691496 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9xctb"] Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.693477 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9xctb" Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.695700 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.699869 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9xctb"] Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.775743 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e43d521-a73a-4d72-8270-bb959b5d0a53-combined-ca-bundle\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd" Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.775783 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2e43d521-a73a-4d72-8270-bb959b5d0a53-ovs-rundir\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd" Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.775813 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e43d521-a73a-4d72-8270-bb959b5d0a53-config\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd" Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.775854 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e43d521-a73a-4d72-8270-bb959b5d0a53-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd" 
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.775914 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2e43d521-a73a-4d72-8270-bb959b5d0a53-ovn-rundir\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.775933 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lll9h\" (UniqueName: \"kubernetes.io/projected/2e43d521-a73a-4d72-8270-bb959b5d0a53-kube-api-access-lll9h\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.819403 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9xctb"]
Nov 28 12:53:35 crc kubenswrapper[4779]: E1128 12:53:35.821266 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-kgt5g ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7fd796d7df-9xctb" podUID="8171df9a-7a8d-49c4-afba-e94decdc1f86"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.855931 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9dg9f"]
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.857078 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.859123 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.864956 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9dg9f"]
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.877976 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgt5g\" (UniqueName: \"kubernetes.io/projected/8171df9a-7a8d-49c4-afba-e94decdc1f86-kube-api-access-kgt5g\") pod \"dnsmasq-dns-7fd796d7df-9xctb\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") " pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.878025 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2e43d521-a73a-4d72-8270-bb959b5d0a53-ovn-rundir\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.878050 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lll9h\" (UniqueName: \"kubernetes.io/projected/2e43d521-a73a-4d72-8270-bb959b5d0a53-kube-api-access-lll9h\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.878075 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e43d521-a73a-4d72-8270-bb959b5d0a53-combined-ca-bundle\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.878104 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2e43d521-a73a-4d72-8270-bb959b5d0a53-ovs-rundir\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.878135 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e43d521-a73a-4d72-8270-bb959b5d0a53-config\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.878172 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-config\") pod \"dnsmasq-dns-7fd796d7df-9xctb\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") " pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.878195 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e43d521-a73a-4d72-8270-bb959b5d0a53-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.878241 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9xctb\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") " pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.878269 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9xctb\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") " pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.886612 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2e43d521-a73a-4d72-8270-bb959b5d0a53-ovn-rundir\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.887236 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e43d521-a73a-4d72-8270-bb959b5d0a53-config\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.887787 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2e43d521-a73a-4d72-8270-bb959b5d0a53-ovs-rundir\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.893313 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e43d521-a73a-4d72-8270-bb959b5d0a53-combined-ca-bundle\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.900201 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e43d521-a73a-4d72-8270-bb959b5d0a53-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.904018 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lll9h\" (UniqueName: \"kubernetes.io/projected/2e43d521-a73a-4d72-8270-bb959b5d0a53-kube-api-access-lll9h\") pod \"ovn-controller-metrics-szlhd\" (UID: \"2e43d521-a73a-4d72-8270-bb959b5d0a53\") " pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.979823 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-config\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.979880 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9xctb\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") " pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.979912 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9xctb\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") " pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.979935 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.979955 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgt5g\" (UniqueName: \"kubernetes.io/projected/8171df9a-7a8d-49c4-afba-e94decdc1f86-kube-api-access-kgt5g\") pod \"dnsmasq-dns-7fd796d7df-9xctb\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") " pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.979988 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.980031 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whctb\" (UniqueName: \"kubernetes.io/projected/40b28a82-461d-43d0-a8b7-35730dbff017-kube-api-access-whctb\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.980265 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.980287 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-config\") pod \"dnsmasq-dns-7fd796d7df-9xctb\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") " pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.980943 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9xctb\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") " pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.981009 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-config\") pod \"dnsmasq-dns-7fd796d7df-9xctb\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") " pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:35 crc kubenswrapper[4779]: I1128 12:53:35.981533 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9xctb\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") " pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.006998 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgt5g\" (UniqueName: \"kubernetes.io/projected/8171df9a-7a8d-49c4-afba-e94decdc1f86-kube-api-access-kgt5g\") pod \"dnsmasq-dns-7fd796d7df-9xctb\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") " pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.082193 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.082334 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.082469 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whctb\" (UniqueName: \"kubernetes.io/projected/40b28a82-461d-43d0-a8b7-35730dbff017-kube-api-access-whctb\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.082523 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.082606 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-config\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.083305 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.083448 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.084189 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-config\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.084255 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.106215 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whctb\" (UniqueName: \"kubernetes.io/projected/40b28a82-461d-43d0-a8b7-35730dbff017-kube-api-access-whctb\") pod \"dnsmasq-dns-86db49b7ff-9dg9f\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.170751 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.203894 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-szlhd"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.322898 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.334761 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.488175 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-ovsdbserver-nb\") pod \"8171df9a-7a8d-49c4-afba-e94decdc1f86\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") "
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.488653 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgt5g\" (UniqueName: \"kubernetes.io/projected/8171df9a-7a8d-49c4-afba-e94decdc1f86-kube-api-access-kgt5g\") pod \"8171df9a-7a8d-49c4-afba-e94decdc1f86\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") "
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.488909 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-config\") pod \"8171df9a-7a8d-49c4-afba-e94decdc1f86\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") "
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.489010 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8171df9a-7a8d-49c4-afba-e94decdc1f86" (UID: "8171df9a-7a8d-49c4-afba-e94decdc1f86"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.489326 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-dns-svc\") pod \"8171df9a-7a8d-49c4-afba-e94decdc1f86\" (UID: \"8171df9a-7a8d-49c4-afba-e94decdc1f86\") "
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.489433 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-config" (OuterVolumeSpecName: "config") pod "8171df9a-7a8d-49c4-afba-e94decdc1f86" (UID: "8171df9a-7a8d-49c4-afba-e94decdc1f86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.489747 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8171df9a-7a8d-49c4-afba-e94decdc1f86" (UID: "8171df9a-7a8d-49c4-afba-e94decdc1f86"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.490397 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.490436 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.490459 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8171df9a-7a8d-49c4-afba-e94decdc1f86-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.504711 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8171df9a-7a8d-49c4-afba-e94decdc1f86-kube-api-access-kgt5g" (OuterVolumeSpecName: "kube-api-access-kgt5g") pod "8171df9a-7a8d-49c4-afba-e94decdc1f86" (UID: "8171df9a-7a8d-49c4-afba-e94decdc1f86"). InnerVolumeSpecName "kube-api-access-kgt5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.591936 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgt5g\" (UniqueName: \"kubernetes.io/projected/8171df9a-7a8d-49c4-afba-e94decdc1f86-kube-api-access-kgt5g\") on node \"crc\" DevicePath \"\""
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.869605 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.895434 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9dg9f"]
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.924685 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-8b5gg"]
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.926584 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:36 crc kubenswrapper[4779]: I1128 12:53:36.936010 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8b5gg"]
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.103166 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwxxf\" (UniqueName: \"kubernetes.io/projected/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-kube-api-access-nwxxf\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.103463 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-config\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.103496 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.103547 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.103648 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-dns-svc\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.205416 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.205500 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.205532 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-dns-svc\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.205590 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwxxf\" (UniqueName: \"kubernetes.io/projected/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-kube-api-access-nwxxf\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.205637 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-config\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.206397 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.206592 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.206638 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-config\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.206685 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-dns-svc\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.240930 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwxxf\" (UniqueName: \"kubernetes.io/projected/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-kube-api-access-nwxxf\") pod \"dnsmasq-dns-698758b865-8b5gg\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.332599 4779 generic.go:334] "Generic (PLEG): container finished" podID="bd0f63de-dfe7-471d-92d8-b41e260d970b" containerID="a1dba365dd7d79ba823ae95715f8a720d6254d016c67e6bdb12ab3dd4cdaaa4d" exitCode=0
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.332709 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bd0f63de-dfe7-471d-92d8-b41e260d970b","Type":"ContainerDied","Data":"a1dba365dd7d79ba823ae95715f8a720d6254d016c67e6bdb12ab3dd4cdaaa4d"}
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.334850 4779 generic.go:334] "Generic (PLEG): container finished" podID="c27e5f17-320d-472d-a3e7-6a0e9fae960b" containerID="71a141a6c20fd6c507b350536184d6e6a954484f621e1733ba8f094f7ecc0586" exitCode=0
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.334895 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c27e5f17-320d-472d-a3e7-6a0e9fae960b","Type":"ContainerDied","Data":"71a141a6c20fd6c507b350536184d6e6a954484f621e1733ba8f094f7ecc0586"}
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.339004 4779 generic.go:334] "Generic (PLEG): container finished" podID="bdf95638-8948-4749-b04d-5a58b43dbc7b" containerID="0cfe64363d1966463bf72066df4f2cff29542d59f74c0e7cac6c8ca808c6c3ec" exitCode=0
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.339082 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9xctb"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.339303 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wcndb" event={"ID":"bdf95638-8948-4749-b04d-5a58b43dbc7b","Type":"ContainerDied","Data":"0cfe64363d1966463bf72066df4f2cff29542d59f74c0e7cac6c8ca808c6c3ec"}
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.393268 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9xctb"]
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.397460 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9xctb"]
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.416738 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.702769 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wcndb"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.744896 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8171df9a-7a8d-49c4-afba-e94decdc1f86" path="/var/lib/kubelet/pods/8171df9a-7a8d-49c4-afba-e94decdc1f86/volumes"
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.814215 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k57v\" (UniqueName: \"kubernetes.io/projected/bdf95638-8948-4749-b04d-5a58b43dbc7b-kube-api-access-2k57v\") pod \"bdf95638-8948-4749-b04d-5a58b43dbc7b\" (UID: \"bdf95638-8948-4749-b04d-5a58b43dbc7b\") "
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.814533 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdf95638-8948-4749-b04d-5a58b43dbc7b-dns-svc\") pod \"bdf95638-8948-4749-b04d-5a58b43dbc7b\" (UID: \"bdf95638-8948-4749-b04d-5a58b43dbc7b\") "
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.814589 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf95638-8948-4749-b04d-5a58b43dbc7b-config\") pod \"bdf95638-8948-4749-b04d-5a58b43dbc7b\" (UID: \"bdf95638-8948-4749-b04d-5a58b43dbc7b\") "
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.817809 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf95638-8948-4749-b04d-5a58b43dbc7b-kube-api-access-2k57v" (OuterVolumeSpecName: "kube-api-access-2k57v") pod "bdf95638-8948-4749-b04d-5a58b43dbc7b" (UID: "bdf95638-8948-4749-b04d-5a58b43dbc7b"). InnerVolumeSpecName "kube-api-access-2k57v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.894705 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf95638-8948-4749-b04d-5a58b43dbc7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bdf95638-8948-4749-b04d-5a58b43dbc7b" (UID: "bdf95638-8948-4749-b04d-5a58b43dbc7b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.895511 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf95638-8948-4749-b04d-5a58b43dbc7b-config" (OuterVolumeSpecName: "config") pod "bdf95638-8948-4749-b04d-5a58b43dbc7b" (UID: "bdf95638-8948-4749-b04d-5a58b43dbc7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.916489 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdf95638-8948-4749-b04d-5a58b43dbc7b-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.916510 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf95638-8948-4749-b04d-5a58b43dbc7b-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:53:37 crc kubenswrapper[4779]: I1128 12:53:37.916520 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k57v\" (UniqueName: \"kubernetes.io/projected/bdf95638-8948-4749-b04d-5a58b43dbc7b-kube-api-access-2k57v\") on node \"crc\" DevicePath \"\""
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.009537 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Nov 28 12:53:38 crc kubenswrapper[4779]: E1128 12:53:38.010065 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf95638-8948-4749-b04d-5a58b43dbc7b" containerName="init"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.010078 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf95638-8948-4749-b04d-5a58b43dbc7b" containerName="init"
Nov 28 12:53:38 crc kubenswrapper[4779]: E1128 12:53:38.010102 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf95638-8948-4749-b04d-5a58b43dbc7b" containerName="dnsmasq-dns"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.010109 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf95638-8948-4749-b04d-5a58b43dbc7b" containerName="dnsmasq-dns"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.010249 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf95638-8948-4749-b04d-5a58b43dbc7b" containerName="dnsmasq-dns"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.022002 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.025956 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.026583 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.028362 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.038655 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-tk9hs"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.042278 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.089017 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-szlhd"]
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.147076 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.147141 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qh2z\" (UniqueName: \"kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-kube-api-access-4qh2z\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.147165 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/265ee755-a70e-4f35-a40a-ef525a3c5088-lock\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.147184 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/265ee755-a70e-4f35-a40a-ef525a3c5088-cache\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.147219 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.165685 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8b5gg"]
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.218697 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9dg9f"]
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.248819 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.248879 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qh2z\" (UniqueName: \"kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-kube-api-access-4qh2z\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.248905 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/265ee755-a70e-4f35-a40a-ef525a3c5088-lock\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.248930 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/265ee755-a70e-4f35-a40a-ef525a3c5088-cache\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.248963 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: E1128 12:53:38.249124 4779 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 28 12:53:38 crc kubenswrapper[4779]: E1128 12:53:38.249143 4779 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 28 12:53:38 crc kubenswrapper[4779]: E1128 12:53:38.249184 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift podName:265ee755-a70e-4f35-a40a-ef525a3c5088 nodeName:}" failed. No retries permitted until 2025-11-28 12:53:38.749169194 +0000 UTC m=+1079.314844548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift") pod "swift-storage-0" (UID: "265ee755-a70e-4f35-a40a-ef525a3c5088") : configmap "swift-ring-files" not found
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.249793 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/265ee755-a70e-4f35-a40a-ef525a3c5088-lock\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.250004 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/265ee755-a70e-4f35-a40a-ef525a3c5088-cache\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.252890 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.267948 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qh2z\" (UniqueName: \"kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-kube-api-access-4qh2z\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.276025 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.350007 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aa122564-e2c8-4ceb-b66d-1b677aaa4b21","Type":"ContainerStarted","Data":"b7aba89f8ef6cf89ce30bc68f24b0d018df8bd7b22b27eaf916a3dcd168dc50f"}
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.365296 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8b5gg" event={"ID":"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba","Type":"ContainerStarted","Data":"194cfbd6e22911e2766414ffc9838e28cb46029d28c5939dbe26b067217e4b21"}
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.368028 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-szlhd" event={"ID":"2e43d521-a73a-4d72-8270-bb959b5d0a53","Type":"ContainerStarted","Data":"f117822129b7e425f90e428761a73bb0d4bb2a89d4207e43ae62adb632006614"}
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.368104 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-szlhd" event={"ID":"2e43d521-a73a-4d72-8270-bb959b5d0a53","Type":"ContainerStarted","Data":"87b0fcc7a6473005a0d4106c949eb95dd1fa1a0121fdb3975f143a5dfa1f1452"}
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.374982 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.156513999 podStartE2EDuration="30.374963006s" podCreationTimestamp="2025-11-28 12:53:08 +0000 UTC" firstStartedPulling="2025-11-28 12:53:17.572208584 +0000 UTC m=+1058.137883938" lastFinishedPulling="2025-11-28 12:53:37.790657591 +0000 UTC m=+1078.356332945" observedRunningTime="2025-11-28 12:53:38.37249532 +0000 UTC m=+1078.938170684" watchObservedRunningTime="2025-11-28 12:53:38.374963006 +0000 UTC m=+1078.940638370"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.376651 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wcndb" event={"ID":"bdf95638-8948-4749-b04d-5a58b43dbc7b","Type":"ContainerDied","Data":"390e8ef0dd3ec50117d43627ac149a50ea0ff31d015605378fd56700e80aee3c"}
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.376658 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wcndb"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.376746 4779 scope.go:117] "RemoveContainer" containerID="0cfe64363d1966463bf72066df4f2cff29542d59f74c0e7cac6c8ca808c6c3ec"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.385800 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7312815e-950e-48e2-bcbe-c74717279168","Type":"ContainerStarted","Data":"5e2b4c42c63336b791588ab54865ccffd4e27221446fad6860844fc22a3f4cf8"}
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.387635 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f" event={"ID":"40b28a82-461d-43d0-a8b7-35730dbff017","Type":"ContainerStarted","Data":"687aed71f5b863fd9ba69b84a6e0c280c858a023f0cf071b9224f745de6d27e6"}
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.389821 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-szlhd" podStartSLOduration=3.389807604 podStartE2EDuration="3.389807604s" podCreationTimestamp="2025-11-28 12:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:53:38.388586392 +0000 UTC m=+1078.954261756" watchObservedRunningTime="2025-11-28 12:53:38.389807604 +0000 UTC m=+1078.955482958"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.408678 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bd0f63de-dfe7-471d-92d8-b41e260d970b","Type":"ContainerStarted","Data":"36eedfacf6bcb09530eaa1d6faaf440f8f129f1795adf8d5ebec23e900c7c7f6"}
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.429748 4779 scope.go:117] "RemoveContainer" containerID="81b115e606113ecdab4b919e5cfcd75ffaa9504ee6300b1864eecb86f10bee60"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.439659 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c27e5f17-320d-472d-a3e7-6a0e9fae960b","Type":"ContainerStarted","Data":"adea7a761555eb92fc3c05c999297c69365674e425346f438aec0e4cdcdfec5e"}
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.443153 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c6d9j" event={"ID":"a9ef6128-c3cf-4c5a-80ff-e0c4c263637d","Type":"ContainerStarted","Data":"5a4ca21bc1e5d0bd291e8cfb30d1fcf7eaa148dc1ec3a111f78895c4f7f67f96"}
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.443886 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-c6d9j"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.443909 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-c6d9j"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.455376 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.557517339 podStartE2EDuration="26.455353632s" podCreationTimestamp="2025-11-28 12:53:12 +0000 UTC" firstStartedPulling="2025-11-28 12:53:18.874521808 +0000 UTC m=+1059.440197152" lastFinishedPulling="2025-11-28 12:53:37.772358091 +0000 UTC m=+1078.338033445" observedRunningTime="2025-11-28 12:53:38.432361675 +0000 UTC m=+1078.998037039" watchObservedRunningTime="2025-11-28 12:53:38.455353632 +0000 UTC m=+1079.021028986"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.491347 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wcndb"]
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.504403 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wcndb"]
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.520939 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.282100861 podStartE2EDuration="37.520921939s" podCreationTimestamp="2025-11-28 12:53:01 +0000 UTC" firstStartedPulling="2025-11-28 12:53:17.206054307 +0000 UTC m=+1057.771729661" lastFinishedPulling="2025-11-28 12:53:26.444875345 +0000 UTC m=+1067.010550739" observedRunningTime="2025-11-28 12:53:38.487613916 +0000 UTC m=+1079.053289270" watchObservedRunningTime="2025-11-28 12:53:38.520921939 +0000 UTC m=+1079.086597293"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.528192 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-55p99"]
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.529242 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.531003 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-c6d9j" podStartSLOduration=20.825473068 podStartE2EDuration="29.530985489s" podCreationTimestamp="2025-11-28 12:53:09 +0000 UTC" firstStartedPulling="2025-11-28 12:53:17.736350394 +0000 UTC m=+1058.302025748" lastFinishedPulling="2025-11-28 12:53:26.441862815 +0000 UTC m=+1067.007538169" observedRunningTime="2025-11-28 12:53:38.519164032 +0000 UTC m=+1079.084839386" watchObservedRunningTime="2025-11-28 12:53:38.530985489 +0000 UTC m=+1079.096660843"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.533901 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.534140 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.534294 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.555747 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-55p99"]
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.569157 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-55p99"]
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.571170 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.706844129 podStartE2EDuration="36.571152586s" podCreationTimestamp="2025-11-28 12:53:02 +0000 UTC" firstStartedPulling="2025-11-28 12:53:17.457401376 +0000 UTC m=+1058.023076730" lastFinishedPulling="2025-11-28 12:53:26.321709823 +0000 UTC m=+1066.887385187" observedRunningTime="2025-11-28 12:53:38.545287063 +0000 UTC m=+1079.110962417" watchObservedRunningTime="2025-11-28 12:53:38.571152586 +0000 UTC m=+1079.136827940"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.580168 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-25kzk"]
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.581072 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.590765 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-25kzk"]
Nov 28 12:53:38 crc kubenswrapper[4779]: E1128 12:53:38.609633 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-5gknm ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-55p99" podUID="4bb42d5c-a247-48a2-89f9-22ea311b083e"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.658455 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hrtj\" (UniqueName: \"kubernetes.io/projected/5e769641-0f27-4979-9823-dff8fe453054-kube-api-access-8hrtj\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.658499 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e769641-0f27-4979-9823-dff8fe453054-scripts\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.658523 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bb42d5c-a247-48a2-89f9-22ea311b083e-scripts\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.658547 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5e769641-0f27-4979-9823-dff8fe453054-etc-swift\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.658569 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5e769641-0f27-4979-9823-dff8fe453054-ring-data-devices\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.658784 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-swiftconf\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.658835 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4bb42d5c-a247-48a2-89f9-22ea311b083e-ring-data-devices\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.658871 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-combined-ca-bundle\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.658944 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-dispersionconf\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.658994 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-swiftconf\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.659078 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-combined-ca-bundle\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.659120 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gknm\" (UniqueName: \"kubernetes.io/projected/4bb42d5c-a247-48a2-89f9-22ea311b083e-kube-api-access-5gknm\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.659669 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4bb42d5c-a247-48a2-89f9-22ea311b083e-etc-swift\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.659701 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-dispersionconf\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761620 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-swiftconf\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761678 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-combined-ca-bundle\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761699 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gknm\" (UniqueName: \"kubernetes.io/projected/4bb42d5c-a247-48a2-89f9-22ea311b083e-kube-api-access-5gknm\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761741 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761759 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4bb42d5c-a247-48a2-89f9-22ea311b083e-etc-swift\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761778 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-dispersionconf\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761827 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hrtj\" (UniqueName: \"kubernetes.io/projected/5e769641-0f27-4979-9823-dff8fe453054-kube-api-access-8hrtj\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761843 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e769641-0f27-4979-9823-dff8fe453054-scripts\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761863 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bb42d5c-a247-48a2-89f9-22ea311b083e-scripts\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761901 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5e769641-0f27-4979-9823-dff8fe453054-etc-swift\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761918 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5e769641-0f27-4979-9823-dff8fe453054-ring-data-devices\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761977 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-swiftconf\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.761998 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4bb42d5c-a247-48a2-89f9-22ea311b083e-ring-data-devices\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.762016 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-combined-ca-bundle\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk"
Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.762072 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-dispersionconf\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99"
Nov 28 12:53:38 crc kubenswrapper[4779]: E1128 12:53:38.762764 4779 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 28 12:53:38 crc kubenswrapper[4779]: E1128 12:53:38.762790 4779 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 28 12:53:38 crc kubenswrapper[4779]: E1128 12:53:38.762838 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift podName:265ee755-a70e-4f35-a40a-ef525a3c5088 nodeName:}" failed. No retries permitted until 2025-11-28 12:53:39.762822735 +0000 UTC m=+1080.328498089 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift") pod "swift-storage-0" (UID: "265ee755-a70e-4f35-a40a-ef525a3c5088") : configmap "swift-ring-files" not found Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.763254 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e769641-0f27-4979-9823-dff8fe453054-scripts\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.763439 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bb42d5c-a247-48a2-89f9-22ea311b083e-scripts\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.763615 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4bb42d5c-a247-48a2-89f9-22ea311b083e-ring-data-devices\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.763712 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5e769641-0f27-4979-9823-dff8fe453054-ring-data-devices\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.763829 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4bb42d5c-a247-48a2-89f9-22ea311b083e-etc-swift\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.763845 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5e769641-0f27-4979-9823-dff8fe453054-etc-swift\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.766655 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-dispersionconf\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.767386 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-swiftconf\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.768393 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-combined-ca-bundle\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " 
pod="openstack/swift-ring-rebalance-55p99" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.768733 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-swiftconf\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.768974 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-combined-ca-bundle\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.783843 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gknm\" (UniqueName: \"kubernetes.io/projected/4bb42d5c-a247-48a2-89f9-22ea311b083e-kube-api-access-5gknm\") pod \"swift-ring-rebalance-55p99\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " pod="openstack/swift-ring-rebalance-55p99" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.786286 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hrtj\" (UniqueName: \"kubernetes.io/projected/5e769641-0f27-4979-9823-dff8fe453054-kube-api-access-8hrtj\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.791343 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-dispersionconf\") pod \"swift-ring-rebalance-25kzk\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " pod="openstack/swift-ring-rebalance-25kzk" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.913190 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:38 crc kubenswrapper[4779]: I1128 12:53:38.916953 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-25kzk" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.463617 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-25kzk"] Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.468693 4779 generic.go:334] "Generic (PLEG): container finished" podID="40b28a82-461d-43d0-a8b7-35730dbff017" containerID="b789adbbc15d54d80e0330a742084a3dd7f63e61f373be2a56337eedc437cad4" exitCode=0 Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.468766 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f" event={"ID":"40b28a82-461d-43d0-a8b7-35730dbff017","Type":"ContainerDied","Data":"b789adbbc15d54d80e0330a742084a3dd7f63e61f373be2a56337eedc437cad4"} Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.474793 4779 generic.go:334] "Generic (PLEG): container finished" podID="8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" containerID="adea18a5be93bc202a7808de0b66f45b7a429a25fb8c0845f0bdc63c4a4ae0a2" exitCode=0 Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.474848 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8b5gg" event={"ID":"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba","Type":"ContainerDied","Data":"adea18a5be93bc202a7808de0b66f45b7a429a25fb8c0845f0bdc63c4a4ae0a2"} Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.486393 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-55p99" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.556324 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-55p99" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.675537 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4bb42d5c-a247-48a2-89f9-22ea311b083e-etc-swift\") pod \"4bb42d5c-a247-48a2-89f9-22ea311b083e\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.675847 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4bb42d5c-a247-48a2-89f9-22ea311b083e-ring-data-devices\") pod \"4bb42d5c-a247-48a2-89f9-22ea311b083e\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.675900 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-combined-ca-bundle\") pod \"4bb42d5c-a247-48a2-89f9-22ea311b083e\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.675924 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bb42d5c-a247-48a2-89f9-22ea311b083e-scripts\") pod \"4bb42d5c-a247-48a2-89f9-22ea311b083e\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.675987 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-swiftconf\") pod \"4bb42d5c-a247-48a2-89f9-22ea311b083e\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.676105 4779 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gknm\" (UniqueName: \"kubernetes.io/projected/4bb42d5c-a247-48a2-89f9-22ea311b083e-kube-api-access-5gknm\") pod \"4bb42d5c-a247-48a2-89f9-22ea311b083e\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.676134 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-dispersionconf\") pod \"4bb42d5c-a247-48a2-89f9-22ea311b083e\" (UID: \"4bb42d5c-a247-48a2-89f9-22ea311b083e\") " Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.676244 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb42d5c-a247-48a2-89f9-22ea311b083e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4bb42d5c-a247-48a2-89f9-22ea311b083e" (UID: "4bb42d5c-a247-48a2-89f9-22ea311b083e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.676401 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb42d5c-a247-48a2-89f9-22ea311b083e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4bb42d5c-a247-48a2-89f9-22ea311b083e" (UID: "4bb42d5c-a247-48a2-89f9-22ea311b083e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.676511 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb42d5c-a247-48a2-89f9-22ea311b083e-scripts" (OuterVolumeSpecName: "scripts") pod "4bb42d5c-a247-48a2-89f9-22ea311b083e" (UID: "4bb42d5c-a247-48a2-89f9-22ea311b083e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.676691 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bb42d5c-a247-48a2-89f9-22ea311b083e-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.676707 4779 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4bb42d5c-a247-48a2-89f9-22ea311b083e-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.676718 4779 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4bb42d5c-a247-48a2-89f9-22ea311b083e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.681043 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4bb42d5c-a247-48a2-89f9-22ea311b083e" (UID: "4bb42d5c-a247-48a2-89f9-22ea311b083e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.681426 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4bb42d5c-a247-48a2-89f9-22ea311b083e" (UID: "4bb42d5c-a247-48a2-89f9-22ea311b083e"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.682562 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bb42d5c-a247-48a2-89f9-22ea311b083e" (UID: "4bb42d5c-a247-48a2-89f9-22ea311b083e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.683233 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb42d5c-a247-48a2-89f9-22ea311b083e-kube-api-access-5gknm" (OuterVolumeSpecName: "kube-api-access-5gknm") pod "4bb42d5c-a247-48a2-89f9-22ea311b083e" (UID: "4bb42d5c-a247-48a2-89f9-22ea311b083e"). InnerVolumeSpecName "kube-api-access-5gknm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.744180 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf95638-8948-4749-b04d-5a58b43dbc7b" path="/var/lib/kubelet/pods/bdf95638-8948-4749-b04d-5a58b43dbc7b/volumes" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.751896 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.777712 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.777850 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.777868 4779 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.777882 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gknm\" (UniqueName: \"kubernetes.io/projected/4bb42d5c-a247-48a2-89f9-22ea311b083e-kube-api-access-5gknm\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.777897 4779 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4bb42d5c-a247-48a2-89f9-22ea311b083e-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:39 crc kubenswrapper[4779]: E1128 12:53:39.777920 4779 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 12:53:39 crc kubenswrapper[4779]: E1128 12:53:39.777942 4779 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 12:53:39 crc kubenswrapper[4779]: E1128 12:53:39.777992 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift podName:265ee755-a70e-4f35-a40a-ef525a3c5088 nodeName:}" failed. 
No retries permitted until 2025-11-28 12:53:41.77797559 +0000 UTC m=+1082.343650944 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift") pod "swift-storage-0" (UID: "265ee755-a70e-4f35-a40a-ef525a3c5088") : configmap "swift-ring-files" not found Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.879045 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-config\") pod \"40b28a82-461d-43d0-a8b7-35730dbff017\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.879161 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-dns-svc\") pod \"40b28a82-461d-43d0-a8b7-35730dbff017\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.879282 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-ovsdbserver-sb\") pod \"40b28a82-461d-43d0-a8b7-35730dbff017\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.879328 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whctb\" (UniqueName: \"kubernetes.io/projected/40b28a82-461d-43d0-a8b7-35730dbff017-kube-api-access-whctb\") pod \"40b28a82-461d-43d0-a8b7-35730dbff017\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.879383 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-ovsdbserver-nb\") pod \"40b28a82-461d-43d0-a8b7-35730dbff017\" (UID: \"40b28a82-461d-43d0-a8b7-35730dbff017\") " Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.883792 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b28a82-461d-43d0-a8b7-35730dbff017-kube-api-access-whctb" (OuterVolumeSpecName: "kube-api-access-whctb") pod "40b28a82-461d-43d0-a8b7-35730dbff017" (UID: "40b28a82-461d-43d0-a8b7-35730dbff017"). InnerVolumeSpecName "kube-api-access-whctb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.896841 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-config" (OuterVolumeSpecName: "config") pod "40b28a82-461d-43d0-a8b7-35730dbff017" (UID: "40b28a82-461d-43d0-a8b7-35730dbff017"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.898281 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40b28a82-461d-43d0-a8b7-35730dbff017" (UID: "40b28a82-461d-43d0-a8b7-35730dbff017"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.899743 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40b28a82-461d-43d0-a8b7-35730dbff017" (UID: "40b28a82-461d-43d0-a8b7-35730dbff017"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.920004 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "40b28a82-461d-43d0-a8b7-35730dbff017" (UID: "40b28a82-461d-43d0-a8b7-35730dbff017"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.981431 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whctb\" (UniqueName: \"kubernetes.io/projected/40b28a82-461d-43d0-a8b7-35730dbff017-kube-api-access-whctb\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.981462 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.981472 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.981480 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:39 crc kubenswrapper[4779]: I1128 12:53:39.981489 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40b28a82-461d-43d0-a8b7-35730dbff017-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.168652 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.169779 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.223501 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.483705 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-25kzk" event={"ID":"5e769641-0f27-4979-9823-dff8fe453054","Type":"ContainerStarted","Data":"57eb440fe0db38db8e3f7677ca6d4080fb38cedb1e894df7872d396e3a2c99e2"} Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.486323 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f" Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.486314 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9dg9f" event={"ID":"40b28a82-461d-43d0-a8b7-35730dbff017","Type":"ContainerDied","Data":"687aed71f5b863fd9ba69b84a6e0c280c858a023f0cf071b9224f745de6d27e6"} Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.486376 4779 scope.go:117] "RemoveContainer" containerID="b789adbbc15d54d80e0330a742084a3dd7f63e61f373be2a56337eedc437cad4" Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.490251 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-55p99" Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.490334 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8b5gg" event={"ID":"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba","Type":"ContainerStarted","Data":"802074bc68ba8fb39b3da0987f20e2b07ad90d6e52adc2870fa58e01a7e66fc7"} Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.522938 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-8b5gg" podStartSLOduration=4.522918402 podStartE2EDuration="4.522918402s" podCreationTimestamp="2025-11-28 12:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:53:40.519794228 +0000 UTC m=+1081.085469582" watchObservedRunningTime="2025-11-28 12:53:40.522918402 +0000 UTC m=+1081.088593756" Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.530623 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.551467 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-55p99"] Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.561368 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-55p99"] Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.580386 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9dg9f"] Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.615624 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9dg9f"] Nov 28 12:53:40 crc kubenswrapper[4779]: E1128 12:53:40.909135 4779 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.107:41498->38.102.83.107:35311: write tcp 38.102.83.107:41498->38.102.83.107:35311: write: broken pipe Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.912442 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:40 crc kubenswrapper[4779]: I1128 12:53:40.960062 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.497673 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-8b5gg" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.534882 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.686366 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 28 
12:53:41 crc kubenswrapper[4779]: E1128 12:53:41.686758 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b28a82-461d-43d0-a8b7-35730dbff017" containerName="init" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.686777 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b28a82-461d-43d0-a8b7-35730dbff017" containerName="init" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.686992 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b28a82-461d-43d0-a8b7-35730dbff017" containerName="init" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.688538 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.692024 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7tpsq" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.693597 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.694420 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.694679 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.696940 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.714354 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d78bd78f-4723-4bf3-99ee-95509a0100af-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.714390 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d78bd78f-4723-4bf3-99ee-95509a0100af-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.714413 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78bd78f-4723-4bf3-99ee-95509a0100af-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.714456 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d78bd78f-4723-4bf3-99ee-95509a0100af-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.714495 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d78bd78f-4723-4bf3-99ee-95509a0100af-scripts\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.714517 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvv77\" (UniqueName: \"kubernetes.io/projected/d78bd78f-4723-4bf3-99ee-95509a0100af-kube-api-access-mvv77\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.714561 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d78bd78f-4723-4bf3-99ee-95509a0100af-config\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.737178 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b28a82-461d-43d0-a8b7-35730dbff017" path="/var/lib/kubelet/pods/40b28a82-461d-43d0-a8b7-35730dbff017/volumes" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.737895 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb42d5c-a247-48a2-89f9-22ea311b083e" path="/var/lib/kubelet/pods/4bb42d5c-a247-48a2-89f9-22ea311b083e/volumes" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.816430 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d78bd78f-4723-4bf3-99ee-95509a0100af-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.816490 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d78bd78f-4723-4bf3-99ee-95509a0100af-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.816518 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78bd78f-4723-4bf3-99ee-95509a0100af-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.816566 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.816594 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d78bd78f-4723-4bf3-99ee-95509a0100af-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.816651 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d78bd78f-4723-4bf3-99ee-95509a0100af-scripts\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.816682 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvv77\" (UniqueName: 
\"kubernetes.io/projected/d78bd78f-4723-4bf3-99ee-95509a0100af-kube-api-access-mvv77\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.816715 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d78bd78f-4723-4bf3-99ee-95509a0100af-config\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.817199 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d78bd78f-4723-4bf3-99ee-95509a0100af-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.818073 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d78bd78f-4723-4bf3-99ee-95509a0100af-scripts\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.818531 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d78bd78f-4723-4bf3-99ee-95509a0100af-config\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: E1128 12:53:41.818752 4779 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 12:53:41 crc kubenswrapper[4779]: E1128 12:53:41.818769 4779 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 12:53:41 crc kubenswrapper[4779]: E1128 12:53:41.818807 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift podName:265ee755-a70e-4f35-a40a-ef525a3c5088 nodeName:}" failed. No retries permitted until 2025-11-28 12:53:45.818795144 +0000 UTC m=+1086.384470518 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift") pod "swift-storage-0" (UID: "265ee755-a70e-4f35-a40a-ef525a3c5088") : configmap "swift-ring-files" not found Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.823296 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d78bd78f-4723-4bf3-99ee-95509a0100af-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.829108 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d78bd78f-4723-4bf3-99ee-95509a0100af-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.829278 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78bd78f-4723-4bf3-99ee-95509a0100af-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:41 crc kubenswrapper[4779]: I1128 12:53:41.835259 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvv77\" (UniqueName: \"kubernetes.io/projected/d78bd78f-4723-4bf3-99ee-95509a0100af-kube-api-access-mvv77\") pod \"ovn-northd-0\" (UID: \"d78bd78f-4723-4bf3-99ee-95509a0100af\") " pod="openstack/ovn-northd-0" Nov 28 12:53:42 crc kubenswrapper[4779]: I1128 12:53:42.013812 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 12:53:42 crc kubenswrapper[4779]: I1128 12:53:42.885280 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 28 12:53:42 crc kubenswrapper[4779]: I1128 12:53:42.885326 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 28 12:53:43 crc kubenswrapper[4779]: I1128 12:53:43.335293 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 12:53:43 crc kubenswrapper[4779]: W1128 12:53:43.337737 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd78bd78f_4723_4bf3_99ee_95509a0100af.slice/crio-f69a48cf6c32ab9270125a9dc2ac9a51f3b78e69e6f5159ad9f12acd19dc0ba1 WatchSource:0}: Error finding container f69a48cf6c32ab9270125a9dc2ac9a51f3b78e69e6f5159ad9f12acd19dc0ba1: Status 404 returned error can't find the container with id f69a48cf6c32ab9270125a9dc2ac9a51f3b78e69e6f5159ad9f12acd19dc0ba1 Nov 28 12:53:43 crc kubenswrapper[4779]: I1128 12:53:43.514083 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d78bd78f-4723-4bf3-99ee-95509a0100af","Type":"ContainerStarted","Data":"f69a48cf6c32ab9270125a9dc2ac9a51f3b78e69e6f5159ad9f12acd19dc0ba1"} Nov 28 12:53:43 crc kubenswrapper[4779]: I1128 12:53:43.516223 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-25kzk" event={"ID":"5e769641-0f27-4979-9823-dff8fe453054","Type":"ContainerStarted","Data":"1ebe07777e8ade63683c3cb1fb7ad06580d818b1fdb35522f7116b96a1c83378"} Nov 28 12:53:43 crc kubenswrapper[4779]: I1128 12:53:43.536823 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-25kzk" podStartSLOduration=2.080734126 podStartE2EDuration="5.536802472s" podCreationTimestamp="2025-11-28 12:53:38 +0000 UTC" firstStartedPulling="2025-11-28 12:53:39.474267438 +0000 UTC m=+1080.039942792" lastFinishedPulling="2025-11-28 12:53:42.930335784 +0000 UTC m=+1083.496011138" observedRunningTime="2025-11-28 12:53:43.533810651 +0000 UTC m=+1084.099486005" watchObservedRunningTime="2025-11-28 12:53:43.536802472 +0000 UTC m=+1084.102477826" Nov 28 12:53:44 crc kubenswrapper[4779]: I1128 12:53:44.242323 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:44 crc kubenswrapper[4779]: I1128 12:53:44.242432 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:44 crc kubenswrapper[4779]: I1128 12:53:44.356208 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:44 crc kubenswrapper[4779]: I1128 12:53:44.643408 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 28 12:53:45 crc kubenswrapper[4779]: I1128 12:53:45.913481 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0" Nov 28 12:53:45 crc kubenswrapper[4779]: E1128 12:53:45.913695 4779 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not 
found Nov 28 12:53:45 crc kubenswrapper[4779]: E1128 12:53:45.913918 4779 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 12:53:45 crc kubenswrapper[4779]: E1128 12:53:45.913973 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift podName:265ee755-a70e-4f35-a40a-ef525a3c5088 nodeName:}" failed. No retries permitted until 2025-11-28 12:53:53.913957043 +0000 UTC m=+1094.479632397 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift") pod "swift-storage-0" (UID: "265ee755-a70e-4f35-a40a-ef525a3c5088") : configmap "swift-ring-files" not found Nov 28 12:53:46 crc kubenswrapper[4779]: I1128 12:53:46.139826 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 28 12:53:46 crc kubenswrapper[4779]: I1128 12:53:46.233546 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 28 12:53:46 crc kubenswrapper[4779]: I1128 12:53:46.561166 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d78bd78f-4723-4bf3-99ee-95509a0100af","Type":"ContainerStarted","Data":"469c46c9b2fee131c620758df737e0e9d58aef9a5e4324335fff538dbbfd4cd2"} Nov 28 12:53:46 crc kubenswrapper[4779]: I1128 12:53:46.561766 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 28 12:53:46 crc kubenswrapper[4779]: I1128 12:53:46.561867 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d78bd78f-4723-4bf3-99ee-95509a0100af","Type":"ContainerStarted","Data":"073476889e8d9a1ea9331ba96e8403ca6f94ab6d3a660c60631b506dac7ebf10"} Nov 28 12:53:46 crc kubenswrapper[4779]: I1128 12:53:46.579467 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.807006895 podStartE2EDuration="5.579446584s" podCreationTimestamp="2025-11-28 12:53:41 +0000 UTC" firstStartedPulling="2025-11-28 12:53:43.339728018 +0000 UTC m=+1083.905403372" lastFinishedPulling="2025-11-28 12:53:46.112167697 +0000 UTC m=+1086.677843061" observedRunningTime="2025-11-28 12:53:46.578672274 +0000 UTC m=+1087.144347628" watchObservedRunningTime="2025-11-28 12:53:46.579446584 +0000 UTC m=+1087.145121938" Nov 28 12:53:47 crc kubenswrapper[4779]: I1128 12:53:47.418349 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-8b5gg" Nov 28 12:53:47 crc kubenswrapper[4779]: I1128 12:53:47.547212 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7bqlc"] Nov 28 12:53:47 crc kubenswrapper[4779]: I1128 12:53:47.547775 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" podUID="b5defa7a-9cf5-4dca-a5ed-465ef0801609" containerName="dnsmasq-dns" containerID="cri-o://da1791f2d0c4c99fb925cf07c9e0ed972c6d8538a0acca4eb851288218b0b685" gracePeriod=10 Nov 28 12:53:48 crc kubenswrapper[4779]: I1128 12:53:48.587617 4779 generic.go:334] "Generic (PLEG): container finished" podID="b5defa7a-9cf5-4dca-a5ed-465ef0801609" containerID="da1791f2d0c4c99fb925cf07c9e0ed972c6d8538a0acca4eb851288218b0b685" exitCode=0 Nov 28 12:53:48 crc 
kubenswrapper[4779]: I1128 12:53:48.587865 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" event={"ID":"b5defa7a-9cf5-4dca-a5ed-465ef0801609","Type":"ContainerDied","Data":"da1791f2d0c4c99fb925cf07c9e0ed972c6d8538a0acca4eb851288218b0b685"} Nov 28 12:53:48 crc kubenswrapper[4779]: I1128 12:53:48.871244 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:53:48 crc kubenswrapper[4779]: I1128 12:53:48.967168 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64p7f\" (UniqueName: \"kubernetes.io/projected/b5defa7a-9cf5-4dca-a5ed-465ef0801609-kube-api-access-64p7f\") pod \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\" (UID: \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\") " Nov 28 12:53:48 crc kubenswrapper[4779]: I1128 12:53:48.967389 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5defa7a-9cf5-4dca-a5ed-465ef0801609-dns-svc\") pod \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\" (UID: \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\") " Nov 28 12:53:48 crc kubenswrapper[4779]: I1128 12:53:48.967461 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5defa7a-9cf5-4dca-a5ed-465ef0801609-config\") pod \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\" (UID: \"b5defa7a-9cf5-4dca-a5ed-465ef0801609\") " Nov 28 12:53:48 crc kubenswrapper[4779]: I1128 12:53:48.975216 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5defa7a-9cf5-4dca-a5ed-465ef0801609-kube-api-access-64p7f" (OuterVolumeSpecName: "kube-api-access-64p7f") pod "b5defa7a-9cf5-4dca-a5ed-465ef0801609" (UID: "b5defa7a-9cf5-4dca-a5ed-465ef0801609"). InnerVolumeSpecName "kube-api-access-64p7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:53:49 crc kubenswrapper[4779]: I1128 12:53:49.015190 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5defa7a-9cf5-4dca-a5ed-465ef0801609-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5defa7a-9cf5-4dca-a5ed-465ef0801609" (UID: "b5defa7a-9cf5-4dca-a5ed-465ef0801609"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:49 crc kubenswrapper[4779]: I1128 12:53:49.018632 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5defa7a-9cf5-4dca-a5ed-465ef0801609-config" (OuterVolumeSpecName: "config") pod "b5defa7a-9cf5-4dca-a5ed-465ef0801609" (UID: "b5defa7a-9cf5-4dca-a5ed-465ef0801609"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:49 crc kubenswrapper[4779]: I1128 12:53:49.070008 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64p7f\" (UniqueName: \"kubernetes.io/projected/b5defa7a-9cf5-4dca-a5ed-465ef0801609-kube-api-access-64p7f\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:49 crc kubenswrapper[4779]: I1128 12:53:49.070049 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5defa7a-9cf5-4dca-a5ed-465ef0801609-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:49 crc kubenswrapper[4779]: I1128 12:53:49.070068 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5defa7a-9cf5-4dca-a5ed-465ef0801609-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:49 crc kubenswrapper[4779]: I1128 12:53:49.599924 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" event={"ID":"b5defa7a-9cf5-4dca-a5ed-465ef0801609","Type":"ContainerDied","Data":"ad502463711eed7f85949acded8e59cb61091aeb15026f571b6f1ee5572db478"} Nov 28 12:53:49 crc kubenswrapper[4779]: I1128 12:53:49.600006 4779 scope.go:117] "RemoveContainer" containerID="da1791f2d0c4c99fb925cf07c9e0ed972c6d8538a0acca4eb851288218b0b685" Nov 28 12:53:49 crc kubenswrapper[4779]: I1128 12:53:49.600078 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7bqlc" Nov 28 12:53:49 crc kubenswrapper[4779]: I1128 12:53:49.643067 4779 scope.go:117] "RemoveContainer" containerID="9fd42989e6b77c0052a1d260b586fe7b14e11213256e50dc41140c29e6db731d" Nov 28 12:53:49 crc kubenswrapper[4779]: I1128 12:53:49.649579 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7bqlc"] Nov 28 12:53:49 crc kubenswrapper[4779]: I1128 12:53:49.662464 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7bqlc"] Nov 28 12:53:49 crc kubenswrapper[4779]: I1128 12:53:49.801968 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5defa7a-9cf5-4dca-a5ed-465ef0801609" path="/var/lib/kubelet/pods/b5defa7a-9cf5-4dca-a5ed-465ef0801609/volumes" Nov 28 12:53:53 crc kubenswrapper[4779]: I1128 12:53:53.639674 4779 generic.go:334] "Generic (PLEG): container finished" podID="5e769641-0f27-4979-9823-dff8fe453054" containerID="1ebe07777e8ade63683c3cb1fb7ad06580d818b1fdb35522f7116b96a1c83378" exitCode=0 Nov 28 12:53:53 crc kubenswrapper[4779]: I1128 12:53:53.639749 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-25kzk" event={"ID":"5e769641-0f27-4979-9823-dff8fe453054","Type":"ContainerDied","Data":"1ebe07777e8ade63683c3cb1fb7ad06580d818b1fdb35522f7116b96a1c83378"} Nov 28 12:53:53 crc kubenswrapper[4779]: I1128 12:53:53.958855 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0" Nov 28 12:53:53 crc kubenswrapper[4779]: I1128 12:53:53.967941 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/265ee755-a70e-4f35-a40a-ef525a3c5088-etc-swift\") pod \"swift-storage-0\" (UID: \"265ee755-a70e-4f35-a40a-ef525a3c5088\") " pod="openstack/swift-storage-0" Nov 28 
12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.239149 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.249031 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-425f-account-create-update-l6kl6"] Nov 28 12:53:54 crc kubenswrapper[4779]: E1128 12:53:54.250297 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5defa7a-9cf5-4dca-a5ed-465ef0801609" containerName="dnsmasq-dns" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.250329 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5defa7a-9cf5-4dca-a5ed-465ef0801609" containerName="dnsmasq-dns" Nov 28 12:53:54 crc kubenswrapper[4779]: E1128 12:53:54.250396 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5defa7a-9cf5-4dca-a5ed-465ef0801609" containerName="init" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.250410 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5defa7a-9cf5-4dca-a5ed-465ef0801609" containerName="init" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.251050 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5defa7a-9cf5-4dca-a5ed-465ef0801609" containerName="dnsmasq-dns" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.252420 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-425f-account-create-update-l6kl6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.262437 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.297539 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8msj7"] Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.301993 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8msj7" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.320395 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-425f-account-create-update-l6kl6"] Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.330678 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8msj7"] Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.384736 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e2a93bc-7245-4557-851b-33230f2031dc-operator-scripts\") pod \"keystone-db-create-8msj7\" (UID: \"8e2a93bc-7245-4557-851b-33230f2031dc\") " pod="openstack/keystone-db-create-8msj7" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.384801 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv95p\" (UniqueName: \"kubernetes.io/projected/8e2a93bc-7245-4557-851b-33230f2031dc-kube-api-access-rv95p\") pod \"keystone-db-create-8msj7\" (UID: \"8e2a93bc-7245-4557-851b-33230f2031dc\") " pod="openstack/keystone-db-create-8msj7" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.384854 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj4t4\" (UniqueName: \"kubernetes.io/projected/f140dc70-fd92-49d0-b831-44c97eb32ead-kube-api-access-hj4t4\") pod \"keystone-425f-account-create-update-l6kl6\" (UID: \"f140dc70-fd92-49d0-b831-44c97eb32ead\") " pod="openstack/keystone-425f-account-create-update-l6kl6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.384892 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140dc70-fd92-49d0-b831-44c97eb32ead-operator-scripts\") pod \"keystone-425f-account-create-update-l6kl6\" (UID: \"f140dc70-fd92-49d0-b831-44c97eb32ead\") " pod="openstack/keystone-425f-account-create-update-l6kl6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.486490 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140dc70-fd92-49d0-b831-44c97eb32ead-operator-scripts\") pod \"keystone-425f-account-create-update-l6kl6\" (UID: \"f140dc70-fd92-49d0-b831-44c97eb32ead\") " pod="openstack/keystone-425f-account-create-update-l6kl6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.486861 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e2a93bc-7245-4557-851b-33230f2031dc-operator-scripts\") pod \"keystone-db-create-8msj7\" (UID: \"8e2a93bc-7245-4557-851b-33230f2031dc\") " pod="openstack/keystone-db-create-8msj7" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.486950 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv95p\" (UniqueName: \"kubernetes.io/projected/8e2a93bc-7245-4557-851b-33230f2031dc-kube-api-access-rv95p\") pod \"keystone-db-create-8msj7\" (UID: \"8e2a93bc-7245-4557-851b-33230f2031dc\") " pod="openstack/keystone-db-create-8msj7" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.487030 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj4t4\" (UniqueName: 
\"kubernetes.io/projected/f140dc70-fd92-49d0-b831-44c97eb32ead-kube-api-access-hj4t4\") pod \"keystone-425f-account-create-update-l6kl6\" (UID: \"f140dc70-fd92-49d0-b831-44c97eb32ead\") " pod="openstack/keystone-425f-account-create-update-l6kl6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.487900 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e2a93bc-7245-4557-851b-33230f2031dc-operator-scripts\") pod \"keystone-db-create-8msj7\" (UID: \"8e2a93bc-7245-4557-851b-33230f2031dc\") " pod="openstack/keystone-db-create-8msj7" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.487990 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140dc70-fd92-49d0-b831-44c97eb32ead-operator-scripts\") pod \"keystone-425f-account-create-update-l6kl6\" (UID: \"f140dc70-fd92-49d0-b831-44c97eb32ead\") " pod="openstack/keystone-425f-account-create-update-l6kl6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.507826 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj4t4\" (UniqueName: \"kubernetes.io/projected/f140dc70-fd92-49d0-b831-44c97eb32ead-kube-api-access-hj4t4\") pod \"keystone-425f-account-create-update-l6kl6\" (UID: \"f140dc70-fd92-49d0-b831-44c97eb32ead\") " pod="openstack/keystone-425f-account-create-update-l6kl6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.509367 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv95p\" (UniqueName: \"kubernetes.io/projected/8e2a93bc-7245-4557-851b-33230f2031dc-kube-api-access-rv95p\") pod \"keystone-db-create-8msj7\" (UID: \"8e2a93bc-7245-4557-851b-33230f2031dc\") " pod="openstack/keystone-db-create-8msj7" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.533902 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5vn8z"] Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.536055 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5vn8z" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.543715 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5vn8z"] Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.589310 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4sqj\" (UniqueName: \"kubernetes.io/projected/81190d6e-e211-4aae-890b-bfd66bd92381-kube-api-access-q4sqj\") pod \"placement-db-create-5vn8z\" (UID: \"81190d6e-e211-4aae-890b-bfd66bd92381\") " pod="openstack/placement-db-create-5vn8z" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.589389 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81190d6e-e211-4aae-890b-bfd66bd92381-operator-scripts\") pod \"placement-db-create-5vn8z\" (UID: \"81190d6e-e211-4aae-890b-bfd66bd92381\") " pod="openstack/placement-db-create-5vn8z" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.641431 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8ea0-account-create-update-4kjzs"] Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.644164 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8ea0-account-create-update-4kjzs" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.648045 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.655364 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8ea0-account-create-update-4kjzs"] Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.684138 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-425f-account-create-update-l6kl6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.692080 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4sqj\" (UniqueName: \"kubernetes.io/projected/81190d6e-e211-4aae-890b-bfd66bd92381-kube-api-access-q4sqj\") pod \"placement-db-create-5vn8z\" (UID: \"81190d6e-e211-4aae-890b-bfd66bd92381\") " pod="openstack/placement-db-create-5vn8z" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.692169 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81190d6e-e211-4aae-890b-bfd66bd92381-operator-scripts\") pod \"placement-db-create-5vn8z\" (UID: \"81190d6e-e211-4aae-890b-bfd66bd92381\") " pod="openstack/placement-db-create-5vn8z" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.692216 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqpvv\" (UniqueName: \"kubernetes.io/projected/c1273702-2d8b-401a-afd5-335a5ceb8bbe-kube-api-access-sqpvv\") pod \"placement-8ea0-account-create-update-4kjzs\" (UID: \"c1273702-2d8b-401a-afd5-335a5ceb8bbe\") " pod="openstack/placement-8ea0-account-create-update-4kjzs" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.692317 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1273702-2d8b-401a-afd5-335a5ceb8bbe-operator-scripts\") pod \"placement-8ea0-account-create-update-4kjzs\" (UID: \"c1273702-2d8b-401a-afd5-335a5ceb8bbe\") " pod="openstack/placement-8ea0-account-create-update-4kjzs" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.693349 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81190d6e-e211-4aae-890b-bfd66bd92381-operator-scripts\") pod \"placement-db-create-5vn8z\" (UID: \"81190d6e-e211-4aae-890b-bfd66bd92381\") " pod="openstack/placement-db-create-5vn8z" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.696487 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8msj7" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.710633 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4sqj\" (UniqueName: \"kubernetes.io/projected/81190d6e-e211-4aae-890b-bfd66bd92381-kube-api-access-q4sqj\") pod \"placement-db-create-5vn8z\" (UID: \"81190d6e-e211-4aae-890b-bfd66bd92381\") " pod="openstack/placement-db-create-5vn8z" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.736995 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-6cjk6"] Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.740479 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-6cjk6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.756384 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6cjk6"] Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.794441 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1273702-2d8b-401a-afd5-335a5ceb8bbe-operator-scripts\") pod \"placement-8ea0-account-create-update-4kjzs\" (UID: \"c1273702-2d8b-401a-afd5-335a5ceb8bbe\") " pod="openstack/placement-8ea0-account-create-update-4kjzs" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.794513 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xw8g\" (UniqueName: \"kubernetes.io/projected/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9-kube-api-access-9xw8g\") pod \"glance-db-create-6cjk6\" (UID: \"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9\") " pod="openstack/glance-db-create-6cjk6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.794649 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9-operator-scripts\") pod \"glance-db-create-6cjk6\" (UID: \"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9\") " pod="openstack/glance-db-create-6cjk6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.794742 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqpvv\" (UniqueName: \"kubernetes.io/projected/c1273702-2d8b-401a-afd5-335a5ceb8bbe-kube-api-access-sqpvv\") pod \"placement-8ea0-account-create-update-4kjzs\" (UID: \"c1273702-2d8b-401a-afd5-335a5ceb8bbe\") " pod="openstack/placement-8ea0-account-create-update-4kjzs" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.795253 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1273702-2d8b-401a-afd5-335a5ceb8bbe-operator-scripts\") pod \"placement-8ea0-account-create-update-4kjzs\" (UID: \"c1273702-2d8b-401a-afd5-335a5ceb8bbe\") " pod="openstack/placement-8ea0-account-create-update-4kjzs" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.823780 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.825323 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqpvv\" (UniqueName: \"kubernetes.io/projected/c1273702-2d8b-401a-afd5-335a5ceb8bbe-kube-api-access-sqpvv\") pod \"placement-8ea0-account-create-update-4kjzs\" (UID: \"c1273702-2d8b-401a-afd5-335a5ceb8bbe\") " pod="openstack/placement-8ea0-account-create-update-4kjzs" Nov 28 12:53:54 crc kubenswrapper[4779]: W1128 12:53:54.839735 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod265ee755_a70e_4f35_a40a_ef525a3c5088.slice/crio-fa3b5b94dd7efef7a39d493373eed734fba38b6fa53c1952ea15f088979291e9 WatchSource:0}: Error finding container fa3b5b94dd7efef7a39d493373eed734fba38b6fa53c1952ea15f088979291e9: Status 404 returned error can't find the container with id fa3b5b94dd7efef7a39d493373eed734fba38b6fa53c1952ea15f088979291e9 Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.857737 4779 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-7fa2-account-create-update-jmqxq"] Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.859719 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fa2-account-create-update-jmqxq" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.871714 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.889195 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7fa2-account-create-update-jmqxq"] Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.889876 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5vn8z" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.896081 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xw8g\" (UniqueName: \"kubernetes.io/projected/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9-kube-api-access-9xw8g\") pod \"glance-db-create-6cjk6\" (UID: \"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9\") " pod="openstack/glance-db-create-6cjk6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.896309 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9-operator-scripts\") pod \"glance-db-create-6cjk6\" (UID: \"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9\") " pod="openstack/glance-db-create-6cjk6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.896968 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9-operator-scripts\") pod \"glance-db-create-6cjk6\" (UID: \"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9\") " pod="openstack/glance-db-create-6cjk6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.924779 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xw8g\" (UniqueName: \"kubernetes.io/projected/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9-kube-api-access-9xw8g\") pod \"glance-db-create-6cjk6\" (UID: \"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9\") " pod="openstack/glance-db-create-6cjk6" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.949112 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-25kzk" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.963281 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8ea0-account-create-update-4kjzs" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.998739 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-combined-ca-bundle\") pod \"5e769641-0f27-4979-9823-dff8fe453054\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.998824 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e769641-0f27-4979-9823-dff8fe453054-scripts\") pod \"5e769641-0f27-4979-9823-dff8fe453054\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.998906 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-dispersionconf\") pod \"5e769641-0f27-4979-9823-dff8fe453054\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.998933 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-swiftconf\") pod \"5e769641-0f27-4979-9823-dff8fe453054\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.999054 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5e769641-0f27-4979-9823-dff8fe453054-etc-swift\") pod \"5e769641-0f27-4979-9823-dff8fe453054\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.999115 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hrtj\" (UniqueName: \"kubernetes.io/projected/5e769641-0f27-4979-9823-dff8fe453054-kube-api-access-8hrtj\") pod \"5e769641-0f27-4979-9823-dff8fe453054\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.999136 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5e769641-0f27-4979-9823-dff8fe453054-ring-data-devices\") pod \"5e769641-0f27-4979-9823-dff8fe453054\" (UID: \"5e769641-0f27-4979-9823-dff8fe453054\") " Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.999437 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e230e28f-3821-476c-b967-2dc505f4206c-operator-scripts\") pod \"glance-7fa2-account-create-update-jmqxq\" (UID: \"e230e28f-3821-476c-b967-2dc505f4206c\") " pod="openstack/glance-7fa2-account-create-update-jmqxq" Nov 28 12:53:54 crc kubenswrapper[4779]: I1128 12:53:54.999538 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkdnn\" (UniqueName: \"kubernetes.io/projected/e230e28f-3821-476c-b967-2dc505f4206c-kube-api-access-qkdnn\") pod \"glance-7fa2-account-create-update-jmqxq\" (UID: \"e230e28f-3821-476c-b967-2dc505f4206c\") " pod="openstack/glance-7fa2-account-create-update-jmqxq" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.001606 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/5e769641-0f27-4979-9823-dff8fe453054-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5e769641-0f27-4979-9823-dff8fe453054" (UID: "5e769641-0f27-4979-9823-dff8fe453054"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.003644 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e769641-0f27-4979-9823-dff8fe453054-kube-api-access-8hrtj" (OuterVolumeSpecName: "kube-api-access-8hrtj") pod "5e769641-0f27-4979-9823-dff8fe453054" (UID: "5e769641-0f27-4979-9823-dff8fe453054"). InnerVolumeSpecName "kube-api-access-8hrtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.003764 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e769641-0f27-4979-9823-dff8fe453054-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5e769641-0f27-4979-9823-dff8fe453054" (UID: "5e769641-0f27-4979-9823-dff8fe453054"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.005467 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5e769641-0f27-4979-9823-dff8fe453054" (UID: "5e769641-0f27-4979-9823-dff8fe453054"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.020571 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e769641-0f27-4979-9823-dff8fe453054-scripts" (OuterVolumeSpecName: "scripts") pod "5e769641-0f27-4979-9823-dff8fe453054" (UID: "5e769641-0f27-4979-9823-dff8fe453054"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.033491 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e769641-0f27-4979-9823-dff8fe453054" (UID: "5e769641-0f27-4979-9823-dff8fe453054"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.060587 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5e769641-0f27-4979-9823-dff8fe453054" (UID: "5e769641-0f27-4979-9823-dff8fe453054"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.064838 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-6cjk6" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.102385 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkdnn\" (UniqueName: \"kubernetes.io/projected/e230e28f-3821-476c-b967-2dc505f4206c-kube-api-access-qkdnn\") pod \"glance-7fa2-account-create-update-jmqxq\" (UID: \"e230e28f-3821-476c-b967-2dc505f4206c\") " pod="openstack/glance-7fa2-account-create-update-jmqxq" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.102560 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e230e28f-3821-476c-b967-2dc505f4206c-operator-scripts\") pod \"glance-7fa2-account-create-update-jmqxq\" (UID: \"e230e28f-3821-476c-b967-2dc505f4206c\") " pod="openstack/glance-7fa2-account-create-update-jmqxq" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.102644 4779 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5e769641-0f27-4979-9823-dff8fe453054-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.102655 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hrtj\" (UniqueName: \"kubernetes.io/projected/5e769641-0f27-4979-9823-dff8fe453054-kube-api-access-8hrtj\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.103624 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e230e28f-3821-476c-b967-2dc505f4206c-operator-scripts\") pod \"glance-7fa2-account-create-update-jmqxq\" (UID: \"e230e28f-3821-476c-b967-2dc505f4206c\") " pod="openstack/glance-7fa2-account-create-update-jmqxq" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.104292 4779 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5e769641-0f27-4979-9823-dff8fe453054-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.104312 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.104323 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e769641-0f27-4979-9823-dff8fe453054-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.104331 4779 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.104339 4779 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5e769641-0f27-4979-9823-dff8fe453054-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.117778 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkdnn\" (UniqueName: \"kubernetes.io/projected/e230e28f-3821-476c-b967-2dc505f4206c-kube-api-access-qkdnn\") pod \"glance-7fa2-account-create-update-jmqxq\" (UID: \"e230e28f-3821-476c-b967-2dc505f4206c\") " 
pod="openstack/glance-7fa2-account-create-update-jmqxq" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.199557 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fa2-account-create-update-jmqxq" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.234834 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-425f-account-create-update-l6kl6"] Nov 28 12:53:55 crc kubenswrapper[4779]: W1128 12:53:55.239674 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf140dc70_fd92_49d0_b831_44c97eb32ead.slice/crio-9e5dff7a50287e2d38333e1b07d7cb867004a36eabc7172c798c0a7b3b8ab8a1 WatchSource:0}: Error finding container 9e5dff7a50287e2d38333e1b07d7cb867004a36eabc7172c798c0a7b3b8ab8a1: Status 404 returned error can't find the container with id 9e5dff7a50287e2d38333e1b07d7cb867004a36eabc7172c798c0a7b3b8ab8a1 Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.338477 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8msj7"] Nov 28 12:53:55 crc kubenswrapper[4779]: W1128 12:53:55.355574 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e2a93bc_7245_4557_851b_33230f2031dc.slice/crio-eed8ffded8d978106e8a48616cebaa67f1b0651757a1a99c6165d16dd6af58c3 WatchSource:0}: Error finding container eed8ffded8d978106e8a48616cebaa67f1b0651757a1a99c6165d16dd6af58c3: Status 404 returned error can't find the container with id eed8ffded8d978106e8a48616cebaa67f1b0651757a1a99c6165d16dd6af58c3 Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.452257 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8ea0-account-create-update-4kjzs"] Nov 28 12:53:55 crc kubenswrapper[4779]: W1128 12:53:55.458563 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81190d6e_e211_4aae_890b_bfd66bd92381.slice/crio-79ccde7aa55b74a9475576188efa6a2e31185b23d9808a17c94ed34ea586e1e1 WatchSource:0}: Error finding container 79ccde7aa55b74a9475576188efa6a2e31185b23d9808a17c94ed34ea586e1e1: Status 404 returned error can't find the container with id 79ccde7aa55b74a9475576188efa6a2e31185b23d9808a17c94ed34ea586e1e1 Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.459070 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5vn8z"] Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.561557 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6cjk6"] Nov 28 12:53:55 crc kubenswrapper[4779]: W1128 12:53:55.562969 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a63a0ed_a2ba_4acb_8f0d_a88e165e6cc9.slice/crio-b15b59bb78ce8e8bfd128af236a94a99229d76ca02bc860664d43cd7972bb535 WatchSource:0}: Error finding container b15b59bb78ce8e8bfd128af236a94a99229d76ca02bc860664d43cd7972bb535: Status 404 returned error can't find the container with id b15b59bb78ce8e8bfd128af236a94a99229d76ca02bc860664d43cd7972bb535 Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.659871 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8msj7" 
event={"ID":"8e2a93bc-7245-4557-851b-33230f2031dc","Type":"ContainerStarted","Data":"4a8f33639020f3c0fe5b644edf134735b546c1893f7feb6c7a1293a777446c08"} Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.659917 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8msj7" event={"ID":"8e2a93bc-7245-4557-851b-33230f2031dc","Type":"ContainerStarted","Data":"eed8ffded8d978106e8a48616cebaa67f1b0651757a1a99c6165d16dd6af58c3"} Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.662342 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7fa2-account-create-update-jmqxq"] Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.665876 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-425f-account-create-update-l6kl6" event={"ID":"f140dc70-fd92-49d0-b831-44c97eb32ead","Type":"ContainerStarted","Data":"bc4bf7c3e5cd78cac151a4c3aaf31093108c26b9d53fce0fbf18737703d40733"} Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.665912 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-425f-account-create-update-l6kl6" event={"ID":"f140dc70-fd92-49d0-b831-44c97eb32ead","Type":"ContainerStarted","Data":"9e5dff7a50287e2d38333e1b07d7cb867004a36eabc7172c798c0a7b3b8ab8a1"} Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.668610 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"fa3b5b94dd7efef7a39d493373eed734fba38b6fa53c1952ea15f088979291e9"} Nov 28 12:53:55 crc kubenswrapper[4779]: W1128 12:53:55.669869 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode230e28f_3821_476c_b967_2dc505f4206c.slice/crio-7048cf7ad635e9073a4dffb0233eb7edbc76b6a0c3fb5ace6fe1568f9b166b1d WatchSource:0}: Error finding container 7048cf7ad635e9073a4dffb0233eb7edbc76b6a0c3fb5ace6fe1568f9b166b1d: Status 404 returned error can't find the container with id 7048cf7ad635e9073a4dffb0233eb7edbc76b6a0c3fb5ace6fe1568f9b166b1d Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.670486 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8ea0-account-create-update-4kjzs" event={"ID":"c1273702-2d8b-401a-afd5-335a5ceb8bbe","Type":"ContainerStarted","Data":"d80784a79b7627fa0dfdc15071f37e6e45c825db0493bd49d55b7ad301125a08"} Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.673000 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-25kzk" event={"ID":"5e769641-0f27-4979-9823-dff8fe453054","Type":"ContainerDied","Data":"57eb440fe0db38db8e3f7677ca6d4080fb38cedb1e894df7872d396e3a2c99e2"} Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.673022 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-25kzk" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.673034 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57eb440fe0db38db8e3f7677ca6d4080fb38cedb1e894df7872d396e3a2c99e2" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.677374 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6cjk6" event={"ID":"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9","Type":"ContainerStarted","Data":"b15b59bb78ce8e8bfd128af236a94a99229d76ca02bc860664d43cd7972bb535"} Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.683164 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5vn8z" event={"ID":"81190d6e-e211-4aae-890b-bfd66bd92381","Type":"ContainerStarted","Data":"79ccde7aa55b74a9475576188efa6a2e31185b23d9808a17c94ed34ea586e1e1"} Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.695276 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-425f-account-create-update-l6kl6" podStartSLOduration=1.695259635 podStartE2EDuration="1.695259635s" podCreationTimestamp="2025-11-28 12:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:53:55.691528955 +0000 UTC m=+1096.257204309" watchObservedRunningTime="2025-11-28 12:53:55.695259635 +0000 UTC m=+1096.260934979" Nov 28 12:53:55 crc kubenswrapper[4779]: I1128 12:53:55.695845 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-8msj7" podStartSLOduration=1.69584011 podStartE2EDuration="1.69584011s" podCreationTimestamp="2025-11-28 12:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:53:55.676299856 +0000 UTC m=+1096.241975230" watchObservedRunningTime="2025-11-28 12:53:55.69584011 +0000 UTC m=+1096.261515464" Nov 28 12:53:56 crc kubenswrapper[4779]: I1128 12:53:56.692608 4779 generic.go:334] "Generic (PLEG): container finished" podID="f140dc70-fd92-49d0-b831-44c97eb32ead" containerID="bc4bf7c3e5cd78cac151a4c3aaf31093108c26b9d53fce0fbf18737703d40733" exitCode=0 Nov 28 12:53:56 crc kubenswrapper[4779]: I1128 12:53:56.692962 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-425f-account-create-update-l6kl6" event={"ID":"f140dc70-fd92-49d0-b831-44c97eb32ead","Type":"ContainerDied","Data":"bc4bf7c3e5cd78cac151a4c3aaf31093108c26b9d53fce0fbf18737703d40733"} Nov 28 12:53:56 crc kubenswrapper[4779]: I1128 12:53:56.696215 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8ea0-account-create-update-4kjzs" event={"ID":"c1273702-2d8b-401a-afd5-335a5ceb8bbe","Type":"ContainerStarted","Data":"fc8c58524405df12ba69786bb2626f5c6b5e862bf65ef0c6a72a5fd1b721d53d"} Nov 28 12:53:56 crc kubenswrapper[4779]: I1128 12:53:56.698483 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5vn8z" event={"ID":"81190d6e-e211-4aae-890b-bfd66bd92381","Type":"ContainerStarted","Data":"dc344c587375bd3128515924db8821f4e1d2fa6f8a7dca76850c0b4f8c8b53ca"} Nov 28 12:53:56 crc kubenswrapper[4779]: I1128 12:53:56.699853 4779 generic.go:334] "Generic (PLEG): container finished" podID="8e2a93bc-7245-4557-851b-33230f2031dc" containerID="4a8f33639020f3c0fe5b644edf134735b546c1893f7feb6c7a1293a777446c08" exitCode=0 Nov 28 12:53:56 crc 
kubenswrapper[4779]: I1128 12:53:56.699983 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8msj7" event={"ID":"8e2a93bc-7245-4557-851b-33230f2031dc","Type":"ContainerDied","Data":"4a8f33639020f3c0fe5b644edf134735b546c1893f7feb6c7a1293a777446c08"} Nov 28 12:53:56 crc kubenswrapper[4779]: I1128 12:53:56.702745 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fa2-account-create-update-jmqxq" event={"ID":"e230e28f-3821-476c-b967-2dc505f4206c","Type":"ContainerStarted","Data":"2127b3d45c645c4324c1049ec66e2977c65be3269d8d492985536aba6284a0eb"} Nov 28 12:53:56 crc kubenswrapper[4779]: I1128 12:53:56.702904 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fa2-account-create-update-jmqxq" event={"ID":"e230e28f-3821-476c-b967-2dc505f4206c","Type":"ContainerStarted","Data":"7048cf7ad635e9073a4dffb0233eb7edbc76b6a0c3fb5ace6fe1568f9b166b1d"} Nov 28 12:53:56 crc kubenswrapper[4779]: I1128 12:53:56.710422 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6cjk6" event={"ID":"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9","Type":"ContainerStarted","Data":"da51acca282f1b5df22c6223a93480a5826ff2942625b9a772c53a78f9a8f914"} Nov 28 12:53:56 crc kubenswrapper[4779]: I1128 12:53:56.723774 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8ea0-account-create-update-4kjzs" podStartSLOduration=2.723749828 podStartE2EDuration="2.723749828s" podCreationTimestamp="2025-11-28 12:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:53:56.718896258 +0000 UTC m=+1097.284571612" watchObservedRunningTime="2025-11-28 12:53:56.723749828 +0000 UTC m=+1097.289425202" Nov 28 12:53:56 crc kubenswrapper[4779]: I1128 12:53:56.746629 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7fa2-account-create-update-jmqxq" podStartSLOduration=2.746604651 podStartE2EDuration="2.746604651s" podCreationTimestamp="2025-11-28 12:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:53:56.734968039 +0000 UTC m=+1097.300643393" watchObservedRunningTime="2025-11-28 12:53:56.746604651 +0000 UTC m=+1097.312280025" Nov 28 12:53:56 crc kubenswrapper[4779]: I1128 12:53:56.772038 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-5vn8z" podStartSLOduration=2.772019212 podStartE2EDuration="2.772019212s" podCreationTimestamp="2025-11-28 12:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:53:56.766209367 +0000 UTC m=+1097.331884731" watchObservedRunningTime="2025-11-28 12:53:56.772019212 +0000 UTC m=+1097.337694566" Nov 28 12:53:56 crc kubenswrapper[4779]: I1128 12:53:56.784316 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-6cjk6" podStartSLOduration=2.784301902 podStartE2EDuration="2.784301902s" podCreationTimestamp="2025-11-28 12:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:53:56.781113696 +0000 UTC m=+1097.346789060" watchObservedRunningTime="2025-11-28 12:53:56.784301902 +0000 UTC m=+1097.349977256" Nov 28 
12:53:57 crc kubenswrapper[4779]: I1128 12:53:57.080810 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 28 12:53:57 crc kubenswrapper[4779]: I1128 12:53:57.722253 4779 generic.go:334] "Generic (PLEG): container finished" podID="e230e28f-3821-476c-b967-2dc505f4206c" containerID="2127b3d45c645c4324c1049ec66e2977c65be3269d8d492985536aba6284a0eb" exitCode=0 Nov 28 12:53:57 crc kubenswrapper[4779]: I1128 12:53:57.722342 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fa2-account-create-update-jmqxq" event={"ID":"e230e28f-3821-476c-b967-2dc505f4206c","Type":"ContainerDied","Data":"2127b3d45c645c4324c1049ec66e2977c65be3269d8d492985536aba6284a0eb"} Nov 28 12:53:57 crc kubenswrapper[4779]: I1128 12:53:57.748818 4779 generic.go:334] "Generic (PLEG): container finished" podID="c1273702-2d8b-401a-afd5-335a5ceb8bbe" containerID="fc8c58524405df12ba69786bb2626f5c6b5e862bf65ef0c6a72a5fd1b721d53d" exitCode=0 Nov 28 12:53:57 crc kubenswrapper[4779]: I1128 12:53:57.752260 4779 generic.go:334] "Generic (PLEG): container finished" podID="0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9" containerID="da51acca282f1b5df22c6223a93480a5826ff2942625b9a772c53a78f9a8f914" exitCode=0 Nov 28 12:53:57 crc kubenswrapper[4779]: I1128 12:53:57.752306 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"8b52b54851d5d5990c482b64efe0c99f452a310b197f87bbff81efff46a06895"} Nov 28 12:53:57 crc kubenswrapper[4779]: I1128 12:53:57.752450 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"18a1a85a87c4dbe96a6969fe8bd263fb8e6a44f32f552509df55223f4826d0ca"} Nov 28 12:53:57 crc kubenswrapper[4779]: I1128 12:53:57.752467 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8ea0-account-create-update-4kjzs" event={"ID":"c1273702-2d8b-401a-afd5-335a5ceb8bbe","Type":"ContainerDied","Data":"fc8c58524405df12ba69786bb2626f5c6b5e862bf65ef0c6a72a5fd1b721d53d"} Nov 28 12:53:57 crc kubenswrapper[4779]: I1128 12:53:57.752507 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6cjk6" event={"ID":"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9","Type":"ContainerDied","Data":"da51acca282f1b5df22c6223a93480a5826ff2942625b9a772c53a78f9a8f914"} Nov 28 12:53:57 crc kubenswrapper[4779]: I1128 12:53:57.756498 4779 generic.go:334] "Generic (PLEG): container finished" podID="81190d6e-e211-4aae-890b-bfd66bd92381" containerID="dc344c587375bd3128515924db8821f4e1d2fa6f8a7dca76850c0b4f8c8b53ca" exitCode=0 Nov 28 12:53:57 crc kubenswrapper[4779]: I1128 12:53:57.756979 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5vn8z" event={"ID":"81190d6e-e211-4aae-890b-bfd66bd92381","Type":"ContainerDied","Data":"dc344c587375bd3128515924db8821f4e1d2fa6f8a7dca76850c0b4f8c8b53ca"} Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.203988 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8msj7" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.204258 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-425f-account-create-update-l6kl6" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.294062 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e2a93bc-7245-4557-851b-33230f2031dc-operator-scripts\") pod \"8e2a93bc-7245-4557-851b-33230f2031dc\" (UID: \"8e2a93bc-7245-4557-851b-33230f2031dc\") " Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.294279 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140dc70-fd92-49d0-b831-44c97eb32ead-operator-scripts\") pod \"f140dc70-fd92-49d0-b831-44c97eb32ead\" (UID: \"f140dc70-fd92-49d0-b831-44c97eb32ead\") " Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.294336 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj4t4\" (UniqueName: \"kubernetes.io/projected/f140dc70-fd92-49d0-b831-44c97eb32ead-kube-api-access-hj4t4\") pod \"f140dc70-fd92-49d0-b831-44c97eb32ead\" (UID: \"f140dc70-fd92-49d0-b831-44c97eb32ead\") " Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.294378 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv95p\" (UniqueName: \"kubernetes.io/projected/8e2a93bc-7245-4557-851b-33230f2031dc-kube-api-access-rv95p\") pod \"8e2a93bc-7245-4557-851b-33230f2031dc\" (UID: \"8e2a93bc-7245-4557-851b-33230f2031dc\") " Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.295394 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e2a93bc-7245-4557-851b-33230f2031dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e2a93bc-7245-4557-851b-33230f2031dc" (UID: "8e2a93bc-7245-4557-851b-33230f2031dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.295400 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f140dc70-fd92-49d0-b831-44c97eb32ead-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f140dc70-fd92-49d0-b831-44c97eb32ead" (UID: "f140dc70-fd92-49d0-b831-44c97eb32ead"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.302386 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e2a93bc-7245-4557-851b-33230f2031dc-kube-api-access-rv95p" (OuterVolumeSpecName: "kube-api-access-rv95p") pod "8e2a93bc-7245-4557-851b-33230f2031dc" (UID: "8e2a93bc-7245-4557-851b-33230f2031dc"). InnerVolumeSpecName "kube-api-access-rv95p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.302542 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f140dc70-fd92-49d0-b831-44c97eb32ead-kube-api-access-hj4t4" (OuterVolumeSpecName: "kube-api-access-hj4t4") pod "f140dc70-fd92-49d0-b831-44c97eb32ead" (UID: "f140dc70-fd92-49d0-b831-44c97eb32ead"). InnerVolumeSpecName "kube-api-access-hj4t4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.396939 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj4t4\" (UniqueName: \"kubernetes.io/projected/f140dc70-fd92-49d0-b831-44c97eb32ead-kube-api-access-hj4t4\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.396995 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv95p\" (UniqueName: \"kubernetes.io/projected/8e2a93bc-7245-4557-851b-33230f2031dc-kube-api-access-rv95p\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.397009 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e2a93bc-7245-4557-851b-33230f2031dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.397021 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140dc70-fd92-49d0-b831-44c97eb32ead-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.770248 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8msj7" event={"ID":"8e2a93bc-7245-4557-851b-33230f2031dc","Type":"ContainerDied","Data":"eed8ffded8d978106e8a48616cebaa67f1b0651757a1a99c6165d16dd6af58c3"} Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.770653 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eed8ffded8d978106e8a48616cebaa67f1b0651757a1a99c6165d16dd6af58c3" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.770299 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8msj7" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.772975 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-425f-account-create-update-l6kl6" event={"ID":"f140dc70-fd92-49d0-b831-44c97eb32ead","Type":"ContainerDied","Data":"9e5dff7a50287e2d38333e1b07d7cb867004a36eabc7172c798c0a7b3b8ab8a1"} Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.773006 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-425f-account-create-update-l6kl6" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.773012 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e5dff7a50287e2d38333e1b07d7cb867004a36eabc7172c798c0a7b3b8ab8a1" Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.776304 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"38b97975e636deab80d1bfc6d919f65369d17a8e409d6add1cd7432b24447b07"} Nov 28 12:53:58 crc kubenswrapper[4779]: I1128 12:53:58.776327 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"9056cb3eb6fe902a00e60d8ce8b236e88e6dd0c506644f1bbb4df261ef257abe"} Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.185639 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6cjk6" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.284113 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5vn8z" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.289748 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8ea0-account-create-update-4kjzs" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.302514 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fa2-account-create-update-jmqxq" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.322847 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xw8g\" (UniqueName: \"kubernetes.io/projected/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9-kube-api-access-9xw8g\") pod \"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9\" (UID: \"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9\") " Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.322914 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9-operator-scripts\") pod \"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9\" (UID: \"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9\") " Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.323847 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9" (UID: "0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.328406 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9-kube-api-access-9xw8g" (OuterVolumeSpecName: "kube-api-access-9xw8g") pod "0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9" (UID: "0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9"). InnerVolumeSpecName "kube-api-access-9xw8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.424806 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81190d6e-e211-4aae-890b-bfd66bd92381-operator-scripts\") pod \"81190d6e-e211-4aae-890b-bfd66bd92381\" (UID: \"81190d6e-e211-4aae-890b-bfd66bd92381\") " Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.424912 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqpvv\" (UniqueName: \"kubernetes.io/projected/c1273702-2d8b-401a-afd5-335a5ceb8bbe-kube-api-access-sqpvv\") pod \"c1273702-2d8b-401a-afd5-335a5ceb8bbe\" (UID: \"c1273702-2d8b-401a-afd5-335a5ceb8bbe\") " Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.424939 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e230e28f-3821-476c-b967-2dc505f4206c-operator-scripts\") pod \"e230e28f-3821-476c-b967-2dc505f4206c\" (UID: \"e230e28f-3821-476c-b967-2dc505f4206c\") " Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.424986 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1273702-2d8b-401a-afd5-335a5ceb8bbe-operator-scripts\") pod \"c1273702-2d8b-401a-afd5-335a5ceb8bbe\" (UID: \"c1273702-2d8b-401a-afd5-335a5ceb8bbe\") " Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.425010 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkdnn\" (UniqueName: \"kubernetes.io/projected/e230e28f-3821-476c-b967-2dc505f4206c-kube-api-access-qkdnn\") pod \"e230e28f-3821-476c-b967-2dc505f4206c\" (UID: \"e230e28f-3821-476c-b967-2dc505f4206c\") " Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.425048 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4sqj\" (UniqueName: \"kubernetes.io/projected/81190d6e-e211-4aae-890b-bfd66bd92381-kube-api-access-q4sqj\") pod \"81190d6e-e211-4aae-890b-bfd66bd92381\" (UID: \"81190d6e-e211-4aae-890b-bfd66bd92381\") " Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.425370 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xw8g\" (UniqueName: \"kubernetes.io/projected/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9-kube-api-access-9xw8g\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.425382 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.425744 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1273702-2d8b-401a-afd5-335a5ceb8bbe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1273702-2d8b-401a-afd5-335a5ceb8bbe" (UID: "c1273702-2d8b-401a-afd5-335a5ceb8bbe"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.426032 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e230e28f-3821-476c-b967-2dc505f4206c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e230e28f-3821-476c-b967-2dc505f4206c" (UID: "e230e28f-3821-476c-b967-2dc505f4206c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.426063 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81190d6e-e211-4aae-890b-bfd66bd92381-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81190d6e-e211-4aae-890b-bfd66bd92381" (UID: "81190d6e-e211-4aae-890b-bfd66bd92381"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.431784 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1273702-2d8b-401a-afd5-335a5ceb8bbe-kube-api-access-sqpvv" (OuterVolumeSpecName: "kube-api-access-sqpvv") pod "c1273702-2d8b-401a-afd5-335a5ceb8bbe" (UID: "c1273702-2d8b-401a-afd5-335a5ceb8bbe"). InnerVolumeSpecName "kube-api-access-sqpvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.432683 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e230e28f-3821-476c-b967-2dc505f4206c-kube-api-access-qkdnn" (OuterVolumeSpecName: "kube-api-access-qkdnn") pod "e230e28f-3821-476c-b967-2dc505f4206c" (UID: "e230e28f-3821-476c-b967-2dc505f4206c"). InnerVolumeSpecName "kube-api-access-qkdnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.438777 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81190d6e-e211-4aae-890b-bfd66bd92381-kube-api-access-q4sqj" (OuterVolumeSpecName: "kube-api-access-q4sqj") pod "81190d6e-e211-4aae-890b-bfd66bd92381" (UID: "81190d6e-e211-4aae-890b-bfd66bd92381"). InnerVolumeSpecName "kube-api-access-q4sqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.528470 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1273702-2d8b-401a-afd5-335a5ceb8bbe-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.528543 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkdnn\" (UniqueName: \"kubernetes.io/projected/e230e28f-3821-476c-b967-2dc505f4206c-kube-api-access-qkdnn\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.528559 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4sqj\" (UniqueName: \"kubernetes.io/projected/81190d6e-e211-4aae-890b-bfd66bd92381-kube-api-access-q4sqj\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.528572 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81190d6e-e211-4aae-890b-bfd66bd92381-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.528584 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqpvv\" (UniqueName: \"kubernetes.io/projected/c1273702-2d8b-401a-afd5-335a5ceb8bbe-kube-api-access-sqpvv\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.528623 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e230e28f-3821-476c-b967-2dc505f4206c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.719655 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7bg4l" podUID="5049f1f8-c081-4671-8d6a-9282a53dd6bd" containerName="ovn-controller" probeResult="failure" output=< Nov 28 12:53:59 crc kubenswrapper[4779]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 12:53:59 crc kubenswrapper[4779]: > Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.752590 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.785007 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8ea0-account-create-update-4kjzs" event={"ID":"c1273702-2d8b-401a-afd5-335a5ceb8bbe","Type":"ContainerDied","Data":"d80784a79b7627fa0dfdc15071f37e6e45c825db0493bd49d55b7ad301125a08"} Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.785074 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d80784a79b7627fa0dfdc15071f37e6e45c825db0493bd49d55b7ad301125a08" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.786122 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8ea0-account-create-update-4kjzs" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.787758 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-6cjk6" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.788208 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6cjk6" event={"ID":"0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9","Type":"ContainerDied","Data":"b15b59bb78ce8e8bfd128af236a94a99229d76ca02bc860664d43cd7972bb535"} Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.788259 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b15b59bb78ce8e8bfd128af236a94a99229d76ca02bc860664d43cd7972bb535" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.790655 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5vn8z" event={"ID":"81190d6e-e211-4aae-890b-bfd66bd92381","Type":"ContainerDied","Data":"79ccde7aa55b74a9475576188efa6a2e31185b23d9808a17c94ed34ea586e1e1"} Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.790719 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79ccde7aa55b74a9475576188efa6a2e31185b23d9808a17c94ed34ea586e1e1" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.790807 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5vn8z" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.796543 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fa2-account-create-update-jmqxq" event={"ID":"e230e28f-3821-476c-b967-2dc505f4206c","Type":"ContainerDied","Data":"7048cf7ad635e9073a4dffb0233eb7edbc76b6a0c3fb5ace6fe1568f9b166b1d"} Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.796712 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7048cf7ad635e9073a4dffb0233eb7edbc76b6a0c3fb5ace6fe1568f9b166b1d" Nov 28 12:53:59 crc kubenswrapper[4779]: I1128 12:53:59.796680 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7fa2-account-create-update-jmqxq" Nov 28 12:54:00 crc kubenswrapper[4779]: I1128 12:54:00.805893 4779 generic.go:334] "Generic (PLEG): container finished" podID="486d0b33-cc59-495a-ba1f-e51c47e0d37e" containerID="dd6084584efb63ab6484ff18173ba4f693e06a7fcd6ab961420fb6eb533c5733" exitCode=0 Nov 28 12:54:00 crc kubenswrapper[4779]: I1128 12:54:00.806132 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"486d0b33-cc59-495a-ba1f-e51c47e0d37e","Type":"ContainerDied","Data":"dd6084584efb63ab6484ff18173ba4f693e06a7fcd6ab961420fb6eb533c5733"} Nov 28 12:54:00 crc kubenswrapper[4779]: I1128 12:54:00.832586 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"cecf57953437767c984857037a9f9eebbc7266f7a89b6b91d1b60c93ef9d4d88"} Nov 28 12:54:00 crc kubenswrapper[4779]: I1128 12:54:00.832656 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"04cbb033cde6c989258fedf4326634db87a0c2c027ddd68d23c886995ebd8a7c"} Nov 28 12:54:00 crc kubenswrapper[4779]: I1128 12:54:00.832678 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"c888ee00e060b4f9ba03e1976a5d9d59825ed2e03f9d856d05ca7e2e15162528"} Nov 28 12:54:00 crc kubenswrapper[4779]: I1128 12:54:00.832697 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"f0496aff6ec43be866ad59c1bd2e86f639aa8e22360e8ec455cb5a82bb7221b2"} Nov 28 12:54:00 crc kubenswrapper[4779]: I1128 12:54:00.835063 4779 generic.go:334] "Generic (PLEG): container finished" podID="1c8c979a-2995-4080-a0b6-173e62faceee" containerID="83a10acad2ad96fbf02da7dec091f283aa84123110fb7c2a468f72da9c94c337" exitCode=0 Nov 28 12:54:00 crc kubenswrapper[4779]: I1128 12:54:00.835123 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1c8c979a-2995-4080-a0b6-173e62faceee","Type":"ContainerDied","Data":"83a10acad2ad96fbf02da7dec091f283aa84123110fb7c2a468f72da9c94c337"} Nov 28 12:54:02 crc kubenswrapper[4779]: I1128 12:54:02.873659 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1c8c979a-2995-4080-a0b6-173e62faceee","Type":"ContainerStarted","Data":"c5011532b76ebd7f52be3d1adec88fce96e7c546eec695918b4b16e74e7c8d0e"} Nov 28 12:54:02 crc kubenswrapper[4779]: I1128 12:54:02.874744 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 12:54:02 crc kubenswrapper[4779]: I1128 12:54:02.874979 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"486d0b33-cc59-495a-ba1f-e51c47e0d37e","Type":"ContainerStarted","Data":"49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647"} Nov 28 12:54:02 crc kubenswrapper[4779]: I1128 12:54:02.875403 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:54:02 crc kubenswrapper[4779]: I1128 12:54:02.920825 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" 
podStartSLOduration=54.826241248 podStartE2EDuration="1m3.920801288s" podCreationTimestamp="2025-11-28 12:52:59 +0000 UTC" firstStartedPulling="2025-11-28 12:53:17.288693183 +0000 UTC m=+1057.854368527" lastFinishedPulling="2025-11-28 12:53:26.383253213 +0000 UTC m=+1066.948928567" observedRunningTime="2025-11-28 12:54:02.917034147 +0000 UTC m=+1103.482709521" watchObservedRunningTime="2025-11-28 12:54:02.920801288 +0000 UTC m=+1103.486476672" Nov 28 12:54:02 crc kubenswrapper[4779]: I1128 12:54:02.954765 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.604285167 podStartE2EDuration="1m3.954735218s" podCreationTimestamp="2025-11-28 12:52:59 +0000 UTC" firstStartedPulling="2025-11-28 12:53:17.099525631 +0000 UTC m=+1057.665200985" lastFinishedPulling="2025-11-28 12:53:26.449975682 +0000 UTC m=+1067.015651036" observedRunningTime="2025-11-28 12:54:02.945167742 +0000 UTC m=+1103.510843096" watchObservedRunningTime="2025-11-28 12:54:02.954735218 +0000 UTC m=+1103.520410572" Nov 28 12:54:03 crc kubenswrapper[4779]: I1128 12:54:03.893118 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"15f44482cc77699dab69740b25502b41da53e3f8a18cfef465c5dc83a2bbcb04"} Nov 28 12:54:03 crc kubenswrapper[4779]: I1128 12:54:03.893978 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"7e0122bcea32ad3470e62714984bcc4886c429a8ac131016f5725fac5cf5dd31"} Nov 28 12:54:03 crc kubenswrapper[4779]: I1128 12:54:03.893992 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"3a94764db8407d6fbf1d5e1cdbd3e1b1b66e61d9c8575d2f4ad277ff3ea02ebb"} Nov 28 12:54:03 crc kubenswrapper[4779]: I1128 12:54:03.894001 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"fdf04bb0844eb70aba41eb0d88f4965343d7c0c49f6a720e8c1c00902367f03c"} Nov 28 12:54:03 crc kubenswrapper[4779]: I1128 12:54:03.894010 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"51d38fe242a83845c9755949509c4863b50d3dd70606a8fdb719fe316e3d8a6a"} Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.714784 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7bg4l" podUID="5049f1f8-c081-4671-8d6a-9282a53dd6bd" containerName="ovn-controller" probeResult="failure" output=< Nov 28 12:54:04 crc kubenswrapper[4779]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 12:54:04 crc kubenswrapper[4779]: > Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.909442 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-hzq4r"] Nov 28 12:54:04 crc kubenswrapper[4779]: E1128 12:54:04.910080 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81190d6e-e211-4aae-890b-bfd66bd92381" containerName="mariadb-database-create" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910117 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="81190d6e-e211-4aae-890b-bfd66bd92381" 
containerName="mariadb-database-create" Nov 28 12:54:04 crc kubenswrapper[4779]: E1128 12:54:04.910140 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9" containerName="mariadb-database-create" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910149 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9" containerName="mariadb-database-create" Nov 28 12:54:04 crc kubenswrapper[4779]: E1128 12:54:04.910168 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f140dc70-fd92-49d0-b831-44c97eb32ead" containerName="mariadb-account-create-update" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910177 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f140dc70-fd92-49d0-b831-44c97eb32ead" containerName="mariadb-account-create-update" Nov 28 12:54:04 crc kubenswrapper[4779]: E1128 12:54:04.910208 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e769641-0f27-4979-9823-dff8fe453054" containerName="swift-ring-rebalance" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910216 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e769641-0f27-4979-9823-dff8fe453054" containerName="swift-ring-rebalance" Nov 28 12:54:04 crc kubenswrapper[4779]: E1128 12:54:04.910231 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2a93bc-7245-4557-851b-33230f2031dc" containerName="mariadb-database-create" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910240 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2a93bc-7245-4557-851b-33230f2031dc" containerName="mariadb-database-create" Nov 28 12:54:04 crc kubenswrapper[4779]: E1128 12:54:04.910254 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1273702-2d8b-401a-afd5-335a5ceb8bbe" containerName="mariadb-account-create-update" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910262 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1273702-2d8b-401a-afd5-335a5ceb8bbe" containerName="mariadb-account-create-update" Nov 28 12:54:04 crc kubenswrapper[4779]: E1128 12:54:04.910282 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e230e28f-3821-476c-b967-2dc505f4206c" containerName="mariadb-account-create-update" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910291 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e230e28f-3821-476c-b967-2dc505f4206c" containerName="mariadb-account-create-update" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910499 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e2a93bc-7245-4557-851b-33230f2031dc" containerName="mariadb-database-create" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910519 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="81190d6e-e211-4aae-890b-bfd66bd92381" containerName="mariadb-database-create" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910537 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9" containerName="mariadb-database-create" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910557 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1273702-2d8b-401a-afd5-335a5ceb8bbe" containerName="mariadb-account-create-update" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910572 4779 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e230e28f-3821-476c-b967-2dc505f4206c" containerName="mariadb-account-create-update" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910591 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e769641-0f27-4979-9823-dff8fe453054" containerName="swift-ring-rebalance" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.910604 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f140dc70-fd92-49d0-b831-44c97eb32ead" containerName="mariadb-account-create-update" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.911245 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.913919 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.913978 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q42sg" Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.914347 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"e97db1010d85cdbb77088e3a02ce187776ac15957c605080f7b62f6f5203979f"} Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.914412 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"265ee755-a70e-4f35-a40a-ef525a3c5088","Type":"ContainerStarted","Data":"f7f49966c078553bcc2016b30898129ca6779cd4ea2e79aa70f858a9f00ec18b"} Nov 28 12:54:04 crc kubenswrapper[4779]: I1128 12:54:04.933627 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hzq4r"] Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.000317 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.066876635 podStartE2EDuration="29.000294367s" podCreationTimestamp="2025-11-28 12:53:36 +0000 UTC" firstStartedPulling="2025-11-28 12:53:54.841794863 +0000 UTC m=+1095.407470247" lastFinishedPulling="2025-11-28 12:54:02.775212615 +0000 UTC m=+1103.340887979" observedRunningTime="2025-11-28 12:54:04.996347583 +0000 UTC m=+1105.562022947" watchObservedRunningTime="2025-11-28 12:54:05.000294367 +0000 UTC m=+1105.565969721" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.024436 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-db-sync-config-data\") pod \"glance-db-sync-hzq4r\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.024686 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcfwb\" (UniqueName: \"kubernetes.io/projected/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-kube-api-access-kcfwb\") pod \"glance-db-sync-hzq4r\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.024790 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-config-data\") pod \"glance-db-sync-hzq4r\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") 
" pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.024829 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-combined-ca-bundle\") pod \"glance-db-sync-hzq4r\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.126506 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-db-sync-config-data\") pod \"glance-db-sync-hzq4r\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.126845 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcfwb\" (UniqueName: \"kubernetes.io/projected/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-kube-api-access-kcfwb\") pod \"glance-db-sync-hzq4r\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.126991 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-config-data\") pod \"glance-db-sync-hzq4r\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.127133 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-combined-ca-bundle\") pod \"glance-db-sync-hzq4r\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.134904 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-config-data\") pod \"glance-db-sync-hzq4r\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.139744 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-db-sync-config-data\") pod \"glance-db-sync-hzq4r\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.141858 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-combined-ca-bundle\") pod \"glance-db-sync-hzq4r\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.145716 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcfwb\" (UniqueName: \"kubernetes.io/projected/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-kube-api-access-kcfwb\") pod \"glance-db-sync-hzq4r\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.235988 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.268442 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-vjtpr"] Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.270082 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.277134 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.303196 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-vjtpr"] Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.330774 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.330814 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.330848 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-config\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.330876 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.330900 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km8m4\" (UniqueName: \"kubernetes.io/projected/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-kube-api-access-km8m4\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.330940 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.432601 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km8m4\" (UniqueName: \"kubernetes.io/projected/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-kube-api-access-km8m4\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " 
pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.432952 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.433026 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.433042 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.433085 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-config\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.433124 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.433967 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.433985 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.434185 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-config\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.434258 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.434805 
4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.464863 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km8m4\" (UniqueName: \"kubernetes.io/projected/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-kube-api-access-km8m4\") pod \"dnsmasq-dns-77585f5f8c-vjtpr\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") " pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.641782 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.869241 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hzq4r"] Nov 28 12:54:05 crc kubenswrapper[4779]: I1128 12:54:05.922661 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hzq4r" event={"ID":"30731004-d3bb-4ed7-820a-37fe3e7ee7e1","Type":"ContainerStarted","Data":"f3ad39691ae1c31af9fa7e486e555a12bb8dc4bc040b63acb098e466679af52c"} Nov 28 12:54:06 crc kubenswrapper[4779]: I1128 12:54:06.202753 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-vjtpr"] Nov 28 12:54:06 crc kubenswrapper[4779]: W1128 12:54:06.207857 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8367a732_6c2b_4fbd_8325_0e3c6eabc40e.slice/crio-762c62bca407da7bef127c3d83c28d28337ae4fdc3c3e9d8ad7a97d582763e73 WatchSource:0}: Error finding container 762c62bca407da7bef127c3d83c28d28337ae4fdc3c3e9d8ad7a97d582763e73: Status 404 returned error can't find the container with id 762c62bca407da7bef127c3d83c28d28337ae4fdc3c3e9d8ad7a97d582763e73 Nov 28 12:54:06 crc kubenswrapper[4779]: I1128 12:54:06.935440 4779 generic.go:334] "Generic (PLEG): container finished" podID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" containerID="bdd2c1b22e61b1c91b586239352a97b6ac5bbdb40d348f8411441e27bec1dc43" exitCode=0 Nov 28 12:54:06 crc kubenswrapper[4779]: I1128 12:54:06.935505 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" event={"ID":"8367a732-6c2b-4fbd-8325-0e3c6eabc40e","Type":"ContainerDied","Data":"bdd2c1b22e61b1c91b586239352a97b6ac5bbdb40d348f8411441e27bec1dc43"} Nov 28 12:54:06 crc kubenswrapper[4779]: I1128 12:54:06.935531 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" event={"ID":"8367a732-6c2b-4fbd-8325-0e3c6eabc40e","Type":"ContainerStarted","Data":"762c62bca407da7bef127c3d83c28d28337ae4fdc3c3e9d8ad7a97d582763e73"} Nov 28 12:54:07 crc kubenswrapper[4779]: I1128 12:54:07.950599 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" event={"ID":"8367a732-6c2b-4fbd-8325-0e3c6eabc40e","Type":"ContainerStarted","Data":"279416cdd5a7057b391b215ca9fea4e4dc0ce87b39f1f4adb3528ccd3038908c"} Nov 28 12:54:07 crc kubenswrapper[4779]: I1128 12:54:07.951323 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:54:07 crc kubenswrapper[4779]: I1128 12:54:07.978544 4779 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" podStartSLOduration=2.9785204800000002 podStartE2EDuration="2.97852048s" podCreationTimestamp="2025-11-28 12:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:54:07.973347983 +0000 UTC m=+1108.539023377" watchObservedRunningTime="2025-11-28 12:54:07.97852048 +0000 UTC m=+1108.544195834" Nov 28 12:54:09 crc kubenswrapper[4779]: I1128 12:54:09.736876 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7bg4l" podUID="5049f1f8-c081-4671-8d6a-9282a53dd6bd" containerName="ovn-controller" probeResult="failure" output=< Nov 28 12:54:09 crc kubenswrapper[4779]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 12:54:09 crc kubenswrapper[4779]: > Nov 28 12:54:09 crc kubenswrapper[4779]: I1128 12:54:09.767748 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-c6d9j" Nov 28 12:54:09 crc kubenswrapper[4779]: I1128 12:54:09.992815 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7bg4l-config-mrnpf"] Nov 28 12:54:09 crc kubenswrapper[4779]: I1128 12:54:09.994417 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.005112 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.024204 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7bg4l-config-mrnpf"] Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.116732 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-run\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.116783 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-log-ovn\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.116815 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk4x5\" (UniqueName: \"kubernetes.io/projected/098fb61e-d451-4d3c-b556-c22d00e487ec-kube-api-access-tk4x5\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.116841 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/098fb61e-d451-4d3c-b556-c22d00e487ec-scripts\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.116887 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/098fb61e-d451-4d3c-b556-c22d00e487ec-additional-scripts\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.116907 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-run-ovn\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.218533 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-run\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.218679 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-run\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.218698 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-log-ovn\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.218773 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk4x5\" (UniqueName: \"kubernetes.io/projected/098fb61e-d451-4d3c-b556-c22d00e487ec-kube-api-access-tk4x5\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.218812 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-log-ovn\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.218829 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/098fb61e-d451-4d3c-b556-c22d00e487ec-scripts\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.218959 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/098fb61e-d451-4d3c-b556-c22d00e487ec-additional-scripts\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.218999 
Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.219164 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-run-ovn\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf"
Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.221949 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/098fb61e-d451-4d3c-b556-c22d00e487ec-additional-scripts\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf"
Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.222629 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/098fb61e-d451-4d3c-b556-c22d00e487ec-scripts\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf"
Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.242292 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk4x5\" (UniqueName: \"kubernetes.io/projected/098fb61e-d451-4d3c-b556-c22d00e487ec-kube-api-access-tk4x5\") pod \"ovn-controller-7bg4l-config-mrnpf\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " pod="openstack/ovn-controller-7bg4l-config-mrnpf"
Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.319964 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7bg4l-config-mrnpf"
Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.778762 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7bg4l-config-mrnpf"]
Nov 28 12:54:10 crc kubenswrapper[4779]: I1128 12:54:10.985017 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7bg4l-config-mrnpf" event={"ID":"098fb61e-d451-4d3c-b556-c22d00e487ec","Type":"ContainerStarted","Data":"fc6e4f7ba3b115a7734c1355815a4c8f54154656ef3d2799bcd45b6bca210c1a"}
Nov 28 12:54:11 crc kubenswrapper[4779]: I1128 12:54:11.996988 4779 generic.go:334] "Generic (PLEG): container finished" podID="098fb61e-d451-4d3c-b556-c22d00e487ec" containerID="8550a6d31866f1124fe606fc8fe9729e7eb07c23b5656ff7ad010771b01292f4" exitCode=0
Nov 28 12:54:11 crc kubenswrapper[4779]: I1128 12:54:11.997379 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7bg4l-config-mrnpf" event={"ID":"098fb61e-d451-4d3c-b556-c22d00e487ec","Type":"ContainerDied","Data":"8550a6d31866f1124fe606fc8fe9729e7eb07c23b5656ff7ad010771b01292f4"}
Nov 28 12:54:14 crc kubenswrapper[4779]: I1128 12:54:14.732739 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7bg4l"
Nov 28 12:54:15 crc kubenswrapper[4779]: I1128 12:54:15.644304 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr"
Nov 28 12:54:15 crc kubenswrapper[4779]: I1128 12:54:15.718402 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8b5gg"]
Nov 28 12:54:15 crc kubenswrapper[4779]: I1128 12:54:15.718712 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-8b5gg" podUID="8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" containerName="dnsmasq-dns" containerID="cri-o://802074bc68ba8fb39b3da0987f20e2b07ad90d6e52adc2870fa58e01a7e66fc7" gracePeriod=10
Nov 28 12:54:16 crc kubenswrapper[4779]: I1128 12:54:16.042533 4779 generic.go:334] "Generic (PLEG): container finished" podID="8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" containerID="802074bc68ba8fb39b3da0987f20e2b07ad90d6e52adc2870fa58e01a7e66fc7" exitCode=0
Nov 28 12:54:16 crc kubenswrapper[4779]: I1128 12:54:16.042623 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8b5gg" event={"ID":"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba","Type":"ContainerDied","Data":"802074bc68ba8fb39b3da0987f20e2b07ad90d6e52adc2870fa58e01a7e66fc7"}
Nov 28 12:54:17 crc kubenswrapper[4779]: I1128 12:54:17.417484 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-8b5gg" podUID="8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused"
Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.468117 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7bg4l-config-mrnpf"
Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.559430 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8b5gg"
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8b5gg" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.622118 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-run-ovn\") pod \"098fb61e-d451-4d3c-b556-c22d00e487ec\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.622176 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/098fb61e-d451-4d3c-b556-c22d00e487ec-scripts\") pod \"098fb61e-d451-4d3c-b556-c22d00e487ec\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.622200 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-run\") pod \"098fb61e-d451-4d3c-b556-c22d00e487ec\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.622230 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/098fb61e-d451-4d3c-b556-c22d00e487ec-additional-scripts\") pod \"098fb61e-d451-4d3c-b556-c22d00e487ec\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.622268 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-log-ovn\") pod \"098fb61e-d451-4d3c-b556-c22d00e487ec\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.622289 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk4x5\" (UniqueName: \"kubernetes.io/projected/098fb61e-d451-4d3c-b556-c22d00e487ec-kube-api-access-tk4x5\") pod \"098fb61e-d451-4d3c-b556-c22d00e487ec\" (UID: \"098fb61e-d451-4d3c-b556-c22d00e487ec\") " Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.622298 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-run" (OuterVolumeSpecName: "var-run") pod "098fb61e-d451-4d3c-b556-c22d00e487ec" (UID: "098fb61e-d451-4d3c-b556-c22d00e487ec"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.622662 4779 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.623073 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/098fb61e-d451-4d3c-b556-c22d00e487ec-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "098fb61e-d451-4d3c-b556-c22d00e487ec" (UID: "098fb61e-d451-4d3c-b556-c22d00e487ec"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.623142 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "098fb61e-d451-4d3c-b556-c22d00e487ec" (UID: "098fb61e-d451-4d3c-b556-c22d00e487ec"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.623443 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "098fb61e-d451-4d3c-b556-c22d00e487ec" (UID: "098fb61e-d451-4d3c-b556-c22d00e487ec"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.624001 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/098fb61e-d451-4d3c-b556-c22d00e487ec-scripts" (OuterVolumeSpecName: "scripts") pod "098fb61e-d451-4d3c-b556-c22d00e487ec" (UID: "098fb61e-d451-4d3c-b556-c22d00e487ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.627299 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098fb61e-d451-4d3c-b556-c22d00e487ec-kube-api-access-tk4x5" (OuterVolumeSpecName: "kube-api-access-tk4x5") pod "098fb61e-d451-4d3c-b556-c22d00e487ec" (UID: "098fb61e-d451-4d3c-b556-c22d00e487ec"). InnerVolumeSpecName "kube-api-access-tk4x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.724536 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwxxf\" (UniqueName: \"kubernetes.io/projected/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-kube-api-access-nwxxf\") pod \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.724654 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-ovsdbserver-sb\") pod \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.724678 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-config\") pod \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.724771 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-ovsdbserver-nb\") pod \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\" (UID: \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.724841 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-dns-svc\") pod \"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\" (UID: 
\"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba\") " Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.725625 4779 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/098fb61e-d451-4d3c-b556-c22d00e487ec-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.725652 4779 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.725663 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk4x5\" (UniqueName: \"kubernetes.io/projected/098fb61e-d451-4d3c-b556-c22d00e487ec-kube-api-access-tk4x5\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.725695 4779 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/098fb61e-d451-4d3c-b556-c22d00e487ec-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.725708 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/098fb61e-d451-4d3c-b556-c22d00e487ec-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.730687 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-kube-api-access-nwxxf" (OuterVolumeSpecName: "kube-api-access-nwxxf") pod "8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" (UID: "8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba"). InnerVolumeSpecName "kube-api-access-nwxxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.762679 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" (UID: "8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.770570 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" (UID: "8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.778160 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-config" (OuterVolumeSpecName: "config") pod "8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" (UID: "8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.781334 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" (UID: "8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.827400 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.827670 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.827681 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwxxf\" (UniqueName: \"kubernetes.io/projected/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-kube-api-access-nwxxf\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.827691 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:19 crc kubenswrapper[4779]: I1128 12:54:19.827700 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.083451 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7bg4l-config-mrnpf" event={"ID":"098fb61e-d451-4d3c-b556-c22d00e487ec","Type":"ContainerDied","Data":"fc6e4f7ba3b115a7734c1355815a4c8f54154656ef3d2799bcd45b6bca210c1a"} Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.083502 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc6e4f7ba3b115a7734c1355815a4c8f54154656ef3d2799bcd45b6bca210c1a" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.083464 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7bg4l-config-mrnpf" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.085873 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8b5gg" event={"ID":"8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba","Type":"ContainerDied","Data":"194cfbd6e22911e2766414ffc9838e28cb46029d28c5939dbe26b067217e4b21"} Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.085917 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8b5gg" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.086062 4779 scope.go:117] "RemoveContainer" containerID="802074bc68ba8fb39b3da0987f20e2b07ad90d6e52adc2870fa58e01a7e66fc7" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.088014 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hzq4r" event={"ID":"30731004-d3bb-4ed7-820a-37fe3e7ee7e1","Type":"ContainerStarted","Data":"1eada839e5267d3245004362b6eb536e129a4d143dc96d5010455efc59426b88"} Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.117852 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-hzq4r" podStartSLOduration=2.64167968 podStartE2EDuration="16.117831679s" podCreationTimestamp="2025-11-28 12:54:04 +0000 UTC" firstStartedPulling="2025-11-28 12:54:05.887910514 +0000 UTC m=+1106.453585868" lastFinishedPulling="2025-11-28 12:54:19.364062513 +0000 UTC m=+1119.929737867" observedRunningTime="2025-11-28 12:54:20.114641685 +0000 UTC m=+1120.680317069" watchObservedRunningTime="2025-11-28 12:54:20.117831679 +0000 UTC m=+1120.683507033" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.119819 4779 scope.go:117] "RemoveContainer" containerID="adea18a5be93bc202a7808de0b66f45b7a429a25fb8c0845f0bdc63c4a4ae0a2" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.140258 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8b5gg"] Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.155762 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8b5gg"] Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.599916 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7bg4l-config-mrnpf"] Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.609928 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7bg4l-config-mrnpf"] Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.782180 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7bg4l-config-4xlc9"] Nov 28 12:54:20 crc kubenswrapper[4779]: E1128 12:54:20.782881 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" containerName="dnsmasq-dns" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.782924 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" containerName="dnsmasq-dns" Nov 28 12:54:20 crc kubenswrapper[4779]: E1128 12:54:20.782957 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="098fb61e-d451-4d3c-b556-c22d00e487ec" containerName="ovn-config" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.783004 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="098fb61e-d451-4d3c-b556-c22d00e487ec" containerName="ovn-config" Nov 28 12:54:20 crc kubenswrapper[4779]: E1128 12:54:20.783052 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" containerName="init" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.783071 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" containerName="init" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.783611 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" containerName="dnsmasq-dns" Nov 28 12:54:20 crc 
kubenswrapper[4779]: I1128 12:54:20.783657 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="098fb61e-d451-4d3c-b556-c22d00e487ec" containerName="ovn-config" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.784756 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.792343 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7bg4l-config-4xlc9"] Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.799709 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.958309 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.959849 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-run\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.959960 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-additional-scripts\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.959994 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k28jm\" (UniqueName: \"kubernetes.io/projected/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-kube-api-access-k28jm\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.960062 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-scripts\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.960251 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-log-ovn\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:20 crc kubenswrapper[4779]: I1128 12:54:20.960383 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-run-ovn\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.062359 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-scripts\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.062521 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-log-ovn\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.062559 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-run-ovn\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.062632 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-run\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.062682 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-additional-scripts\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.062706 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k28jm\" (UniqueName: \"kubernetes.io/projected/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-kube-api-access-k28jm\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.063506 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-log-ovn\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.063582 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-run-ovn\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.064073 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-run\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.065258 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-additional-scripts\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.066007 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-scripts\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.125345 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k28jm\" (UniqueName: \"kubernetes.io/projected/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-kube-api-access-k28jm\") pod \"ovn-controller-7bg4l-config-4xlc9\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.306257 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.332505 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-js645"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.339274 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-js645" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.362315 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-js645"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.424524 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.472144 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mxq8\" (UniqueName: \"kubernetes.io/projected/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928-kube-api-access-2mxq8\") pod \"cinder-db-create-js645\" (UID: \"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928\") " pod="openstack/cinder-db-create-js645" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.472304 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928-operator-scripts\") pod \"cinder-db-create-js645\" (UID: \"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928\") " pod="openstack/cinder-db-create-js645" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.516226 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-h4bzm"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.520306 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-h4bzm" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.537843 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-h4bzm"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.573401 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mxq8\" (UniqueName: \"kubernetes.io/projected/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928-kube-api-access-2mxq8\") pod \"cinder-db-create-js645\" (UID: \"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928\") " pod="openstack/cinder-db-create-js645" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.573500 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdtvc\" (UniqueName: \"kubernetes.io/projected/07dc1232-19a1-43de-9b7b-9613e964a39b-kube-api-access-qdtvc\") pod \"barbican-db-create-h4bzm\" (UID: \"07dc1232-19a1-43de-9b7b-9613e964a39b\") " pod="openstack/barbican-db-create-h4bzm" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.573536 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928-operator-scripts\") pod \"cinder-db-create-js645\" (UID: \"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928\") " pod="openstack/cinder-db-create-js645" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.573571 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07dc1232-19a1-43de-9b7b-9613e964a39b-operator-scripts\") pod \"barbican-db-create-h4bzm\" (UID: \"07dc1232-19a1-43de-9b7b-9613e964a39b\") " pod="openstack/barbican-db-create-h4bzm" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.574467 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928-operator-scripts\") pod \"cinder-db-create-js645\" (UID: \"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928\") " pod="openstack/cinder-db-create-js645" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.579606 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6c6a-account-create-update-cvfsb"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.580604 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6c6a-account-create-update-cvfsb" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.582755 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.596114 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6c6a-account-create-update-cvfsb"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.597699 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mxq8\" (UniqueName: \"kubernetes.io/projected/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928-kube-api-access-2mxq8\") pod \"cinder-db-create-js645\" (UID: \"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928\") " pod="openstack/cinder-db-create-js645" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.645528 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a859-account-create-update-82cnk"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.646451 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a859-account-create-update-82cnk" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.651749 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.663525 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-js645" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.674889 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73eaf386-eead-4fbe-bbfb-a41423521b9f-operator-scripts\") pod \"cinder-6c6a-account-create-update-cvfsb\" (UID: \"73eaf386-eead-4fbe-bbfb-a41423521b9f\") " pod="openstack/cinder-6c6a-account-create-update-cvfsb" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.674930 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdtvc\" (UniqueName: \"kubernetes.io/projected/07dc1232-19a1-43de-9b7b-9613e964a39b-kube-api-access-qdtvc\") pod \"barbican-db-create-h4bzm\" (UID: \"07dc1232-19a1-43de-9b7b-9613e964a39b\") " pod="openstack/barbican-db-create-h4bzm" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.674974 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07dc1232-19a1-43de-9b7b-9613e964a39b-operator-scripts\") pod \"barbican-db-create-h4bzm\" (UID: \"07dc1232-19a1-43de-9b7b-9613e964a39b\") " pod="openstack/barbican-db-create-h4bzm" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.675024 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7b4f\" (UniqueName: \"kubernetes.io/projected/73eaf386-eead-4fbe-bbfb-a41423521b9f-kube-api-access-l7b4f\") pod \"cinder-6c6a-account-create-update-cvfsb\" (UID: \"73eaf386-eead-4fbe-bbfb-a41423521b9f\") " pod="openstack/cinder-6c6a-account-create-update-cvfsb" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.676781 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a859-account-create-update-82cnk"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.677222 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/07dc1232-19a1-43de-9b7b-9613e964a39b-operator-scripts\") pod \"barbican-db-create-h4bzm\" (UID: \"07dc1232-19a1-43de-9b7b-9613e964a39b\") " pod="openstack/barbican-db-create-h4bzm" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.712069 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdtvc\" (UniqueName: \"kubernetes.io/projected/07dc1232-19a1-43de-9b7b-9613e964a39b-kube-api-access-qdtvc\") pod \"barbican-db-create-h4bzm\" (UID: \"07dc1232-19a1-43de-9b7b-9613e964a39b\") " pod="openstack/barbican-db-create-h4bzm" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.724543 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-bhx9p"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.725538 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-bhx9p" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.761975 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098fb61e-d451-4d3c-b556-c22d00e487ec" path="/var/lib/kubelet/pods/098fb61e-d451-4d3c-b556-c22d00e487ec/volumes" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.763753 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba" path="/var/lib/kubelet/pods/8e899e0c-1cd1-44f5-8c8e-eb451c64d4ba/volumes" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.764615 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-bhx9p"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.776038 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7b4f\" (UniqueName: \"kubernetes.io/projected/73eaf386-eead-4fbe-bbfb-a41423521b9f-kube-api-access-l7b4f\") pod \"cinder-6c6a-account-create-update-cvfsb\" (UID: \"73eaf386-eead-4fbe-bbfb-a41423521b9f\") " pod="openstack/cinder-6c6a-account-create-update-cvfsb" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.776104 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sngpx\" (UniqueName: \"kubernetes.io/projected/ef019325-ce2d-4119-85d3-eac3868665ce-kube-api-access-sngpx\") pod \"barbican-a859-account-create-update-82cnk\" (UID: \"ef019325-ce2d-4119-85d3-eac3868665ce\") " pod="openstack/barbican-a859-account-create-update-82cnk" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.776159 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73eaf386-eead-4fbe-bbfb-a41423521b9f-operator-scripts\") pod \"cinder-6c6a-account-create-update-cvfsb\" (UID: \"73eaf386-eead-4fbe-bbfb-a41423521b9f\") " pod="openstack/cinder-6c6a-account-create-update-cvfsb" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.776181 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef019325-ce2d-4119-85d3-eac3868665ce-operator-scripts\") pod \"barbican-a859-account-create-update-82cnk\" (UID: \"ef019325-ce2d-4119-85d3-eac3868665ce\") " pod="openstack/barbican-a859-account-create-update-82cnk" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.776941 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/73eaf386-eead-4fbe-bbfb-a41423521b9f-operator-scripts\") pod \"cinder-6c6a-account-create-update-cvfsb\" (UID: \"73eaf386-eead-4fbe-bbfb-a41423521b9f\") " pod="openstack/cinder-6c6a-account-create-update-cvfsb" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.802464 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7b4f\" (UniqueName: \"kubernetes.io/projected/73eaf386-eead-4fbe-bbfb-a41423521b9f-kube-api-access-l7b4f\") pod \"cinder-6c6a-account-create-update-cvfsb\" (UID: \"73eaf386-eead-4fbe-bbfb-a41423521b9f\") " pod="openstack/cinder-6c6a-account-create-update-cvfsb" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.830632 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-vlmfj"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.832907 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.838445 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nlxvv" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.838577 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.838778 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.839223 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.853840 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-05a1-account-create-update-bjnnp"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.854811 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-05a1-account-create-update-bjnnp" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.860706 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vlmfj"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.862382 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.874248 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-05a1-account-create-update-bjnnp"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.877275 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sngpx\" (UniqueName: \"kubernetes.io/projected/ef019325-ce2d-4119-85d3-eac3868665ce-kube-api-access-sngpx\") pod \"barbican-a859-account-create-update-82cnk\" (UID: \"ef019325-ce2d-4119-85d3-eac3868665ce\") " pod="openstack/barbican-a859-account-create-update-82cnk" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.877321 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nldsk\" (UniqueName: \"kubernetes.io/projected/4ae2270c-607f-4315-959e-eb8536afafe9-kube-api-access-nldsk\") pod \"keystone-db-sync-vlmfj\" (UID: \"4ae2270c-607f-4315-959e-eb8536afafe9\") " pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.877391 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6tlx\" (UniqueName: \"kubernetes.io/projected/b001146a-ebfc-4821-b7b8-3dbbf14749c9-kube-api-access-v6tlx\") pod \"heat-db-create-bhx9p\" (UID: \"b001146a-ebfc-4821-b7b8-3dbbf14749c9\") " pod="openstack/heat-db-create-bhx9p" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.877421 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae2270c-607f-4315-959e-eb8536afafe9-combined-ca-bundle\") pod \"keystone-db-sync-vlmfj\" (UID: \"4ae2270c-607f-4315-959e-eb8536afafe9\") " pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.877474 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef019325-ce2d-4119-85d3-eac3868665ce-operator-scripts\") pod \"barbican-a859-account-create-update-82cnk\" (UID: \"ef019325-ce2d-4119-85d3-eac3868665ce\") " pod="openstack/barbican-a859-account-create-update-82cnk" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.878321 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b001146a-ebfc-4821-b7b8-3dbbf14749c9-operator-scripts\") pod \"heat-db-create-bhx9p\" (UID: \"b001146a-ebfc-4821-b7b8-3dbbf14749c9\") " pod="openstack/heat-db-create-bhx9p" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.878260 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-h4bzm" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.878153 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef019325-ce2d-4119-85d3-eac3868665ce-operator-scripts\") pod \"barbican-a859-account-create-update-82cnk\" (UID: \"ef019325-ce2d-4119-85d3-eac3868665ce\") " pod="openstack/barbican-a859-account-create-update-82cnk" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.878415 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae2270c-607f-4315-959e-eb8536afafe9-config-data\") pod \"keystone-db-sync-vlmfj\" (UID: \"4ae2270c-607f-4315-959e-eb8536afafe9\") " pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.895606 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6c6a-account-create-update-cvfsb" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.905122 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sngpx\" (UniqueName: \"kubernetes.io/projected/ef019325-ce2d-4119-85d3-eac3868665ce-kube-api-access-sngpx\") pod \"barbican-a859-account-create-update-82cnk\" (UID: \"ef019325-ce2d-4119-85d3-eac3868665ce\") " pod="openstack/barbican-a859-account-create-update-82cnk" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.947790 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bl4b8"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.948684 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bl4b8" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.957813 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-36a9-account-create-update-blv9g"] Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.958806 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-36a9-account-create-update-blv9g" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.961995 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.980653 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nldsk\" (UniqueName: \"kubernetes.io/projected/4ae2270c-607f-4315-959e-eb8536afafe9-kube-api-access-nldsk\") pod \"keystone-db-sync-vlmfj\" (UID: \"4ae2270c-607f-4315-959e-eb8536afafe9\") " pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.980692 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6tlx\" (UniqueName: \"kubernetes.io/projected/b001146a-ebfc-4821-b7b8-3dbbf14749c9-kube-api-access-v6tlx\") pod \"heat-db-create-bhx9p\" (UID: \"b001146a-ebfc-4821-b7b8-3dbbf14749c9\") " pod="openstack/heat-db-create-bhx9p" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.980729 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae2270c-607f-4315-959e-eb8536afafe9-combined-ca-bundle\") pod \"keystone-db-sync-vlmfj\" (UID: \"4ae2270c-607f-4315-959e-eb8536afafe9\") " pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.980751 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ebdf4b-0d38-4646-9e9d-742c3152849c-operator-scripts\") pod \"heat-05a1-account-create-update-bjnnp\" (UID: \"b3ebdf4b-0d38-4646-9e9d-742c3152849c\") " pod="openstack/heat-05a1-account-create-update-bjnnp" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.980794 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b001146a-ebfc-4821-b7b8-3dbbf14749c9-operator-scripts\") pod \"heat-db-create-bhx9p\" (UID: \"b001146a-ebfc-4821-b7b8-3dbbf14749c9\") " pod="openstack/heat-db-create-bhx9p" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.980838 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4k89\" (UniqueName: \"kubernetes.io/projected/b3ebdf4b-0d38-4646-9e9d-742c3152849c-kube-api-access-r4k89\") pod \"heat-05a1-account-create-update-bjnnp\" (UID: \"b3ebdf4b-0d38-4646-9e9d-742c3152849c\") " pod="openstack/heat-05a1-account-create-update-bjnnp" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.980858 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae2270c-607f-4315-959e-eb8536afafe9-config-data\") pod \"keystone-db-sync-vlmfj\" (UID: \"4ae2270c-607f-4315-959e-eb8536afafe9\") " pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.981981 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b001146a-ebfc-4821-b7b8-3dbbf14749c9-operator-scripts\") pod \"heat-db-create-bhx9p\" (UID: \"b001146a-ebfc-4821-b7b8-3dbbf14749c9\") " pod="openstack/heat-db-create-bhx9p" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.988758 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4ae2270c-607f-4315-959e-eb8536afafe9-config-data\") pod \"keystone-db-sync-vlmfj\" (UID: \"4ae2270c-607f-4315-959e-eb8536afafe9\") " pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.991445 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a859-account-create-update-82cnk" Nov 28 12:54:21 crc kubenswrapper[4779]: I1128 12:54:21.992048 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae2270c-607f-4315-959e-eb8536afafe9-combined-ca-bundle\") pod \"keystone-db-sync-vlmfj\" (UID: \"4ae2270c-607f-4315-959e-eb8536afafe9\") " pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.008195 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-36a9-account-create-update-blv9g"] Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.010856 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nldsk\" (UniqueName: \"kubernetes.io/projected/4ae2270c-607f-4315-959e-eb8536afafe9-kube-api-access-nldsk\") pod \"keystone-db-sync-vlmfj\" (UID: \"4ae2270c-607f-4315-959e-eb8536afafe9\") " pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.021542 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6tlx\" (UniqueName: \"kubernetes.io/projected/b001146a-ebfc-4821-b7b8-3dbbf14749c9-kube-api-access-v6tlx\") pod \"heat-db-create-bhx9p\" (UID: \"b001146a-ebfc-4821-b7b8-3dbbf14749c9\") " pod="openstack/heat-db-create-bhx9p" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.031380 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bl4b8"] Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.081875 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aafeb02-0f52-4cf1-b856-e357be7e80b2-operator-scripts\") pod \"neutron-db-create-bl4b8\" (UID: \"0aafeb02-0f52-4cf1-b856-e357be7e80b2\") " pod="openstack/neutron-db-create-bl4b8" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.082391 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4k89\" (UniqueName: \"kubernetes.io/projected/b3ebdf4b-0d38-4646-9e9d-742c3152849c-kube-api-access-r4k89\") pod \"heat-05a1-account-create-update-bjnnp\" (UID: \"b3ebdf4b-0d38-4646-9e9d-742c3152849c\") " pod="openstack/heat-05a1-account-create-update-bjnnp" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.082463 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbdg\" (UniqueName: \"kubernetes.io/projected/0aafeb02-0f52-4cf1-b856-e357be7e80b2-kube-api-access-qfbdg\") pod \"neutron-db-create-bl4b8\" (UID: \"0aafeb02-0f52-4cf1-b856-e357be7e80b2\") " pod="openstack/neutron-db-create-bl4b8" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.082606 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/917a2830-66a9-4c55-9cf0-74c6dac98030-operator-scripts\") pod \"neutron-36a9-account-create-update-blv9g\" (UID: \"917a2830-66a9-4c55-9cf0-74c6dac98030\") " pod="openstack/neutron-36a9-account-create-update-blv9g" Nov 28 12:54:22 crc 
kubenswrapper[4779]: I1128 12:54:22.082685 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ebdf4b-0d38-4646-9e9d-742c3152849c-operator-scripts\") pod \"heat-05a1-account-create-update-bjnnp\" (UID: \"b3ebdf4b-0d38-4646-9e9d-742c3152849c\") " pod="openstack/heat-05a1-account-create-update-bjnnp" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.082785 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt64q\" (UniqueName: \"kubernetes.io/projected/917a2830-66a9-4c55-9cf0-74c6dac98030-kube-api-access-zt64q\") pod \"neutron-36a9-account-create-update-blv9g\" (UID: \"917a2830-66a9-4c55-9cf0-74c6dac98030\") " pod="openstack/neutron-36a9-account-create-update-blv9g" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.083543 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ebdf4b-0d38-4646-9e9d-742c3152849c-operator-scripts\") pod \"heat-05a1-account-create-update-bjnnp\" (UID: \"b3ebdf4b-0d38-4646-9e9d-742c3152849c\") " pod="openstack/heat-05a1-account-create-update-bjnnp" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.090662 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-bhx9p" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.144950 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4k89\" (UniqueName: \"kubernetes.io/projected/b3ebdf4b-0d38-4646-9e9d-742c3152849c-kube-api-access-r4k89\") pod \"heat-05a1-account-create-update-bjnnp\" (UID: \"b3ebdf4b-0d38-4646-9e9d-742c3152849c\") " pod="openstack/heat-05a1-account-create-update-bjnnp" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.150852 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7bg4l-config-4xlc9"] Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.159707 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.169125 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-05a1-account-create-update-bjnnp" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.181046 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7bg4l-config-4xlc9" event={"ID":"409d8e8a-a5ce-4940-8d6b-f58c87eeb764","Type":"ContainerStarted","Data":"233277d5d3923a3c23fb3cc76abb54a5200d84e41c18f4d8409b1a3065b6192c"} Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.185012 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/917a2830-66a9-4c55-9cf0-74c6dac98030-operator-scripts\") pod \"neutron-36a9-account-create-update-blv9g\" (UID: \"917a2830-66a9-4c55-9cf0-74c6dac98030\") " pod="openstack/neutron-36a9-account-create-update-blv9g" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.185081 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt64q\" (UniqueName: \"kubernetes.io/projected/917a2830-66a9-4c55-9cf0-74c6dac98030-kube-api-access-zt64q\") pod \"neutron-36a9-account-create-update-blv9g\" (UID: \"917a2830-66a9-4c55-9cf0-74c6dac98030\") " pod="openstack/neutron-36a9-account-create-update-blv9g" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.185142 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aafeb02-0f52-4cf1-b856-e357be7e80b2-operator-scripts\") pod \"neutron-db-create-bl4b8\" (UID: \"0aafeb02-0f52-4cf1-b856-e357be7e80b2\") " pod="openstack/neutron-db-create-bl4b8" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.185224 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfbdg\" (UniqueName: \"kubernetes.io/projected/0aafeb02-0f52-4cf1-b856-e357be7e80b2-kube-api-access-qfbdg\") pod \"neutron-db-create-bl4b8\" (UID: \"0aafeb02-0f52-4cf1-b856-e357be7e80b2\") " pod="openstack/neutron-db-create-bl4b8" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.186217 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aafeb02-0f52-4cf1-b856-e357be7e80b2-operator-scripts\") pod \"neutron-db-create-bl4b8\" (UID: \"0aafeb02-0f52-4cf1-b856-e357be7e80b2\") " pod="openstack/neutron-db-create-bl4b8" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.190408 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/917a2830-66a9-4c55-9cf0-74c6dac98030-operator-scripts\") pod \"neutron-36a9-account-create-update-blv9g\" (UID: \"917a2830-66a9-4c55-9cf0-74c6dac98030\") " pod="openstack/neutron-36a9-account-create-update-blv9g" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.221048 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt64q\" (UniqueName: \"kubernetes.io/projected/917a2830-66a9-4c55-9cf0-74c6dac98030-kube-api-access-zt64q\") pod \"neutron-36a9-account-create-update-blv9g\" (UID: \"917a2830-66a9-4c55-9cf0-74c6dac98030\") " pod="openstack/neutron-36a9-account-create-update-blv9g" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.222711 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfbdg\" (UniqueName: \"kubernetes.io/projected/0aafeb02-0f52-4cf1-b856-e357be7e80b2-kube-api-access-qfbdg\") pod \"neutron-db-create-bl4b8\" (UID: 
\"0aafeb02-0f52-4cf1-b856-e357be7e80b2\") " pod="openstack/neutron-db-create-bl4b8" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.288704 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bl4b8" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.315848 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-36a9-account-create-update-blv9g" Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.337264 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-js645"] Nov 28 12:54:22 crc kubenswrapper[4779]: W1128 12:54:22.380429 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e0e6aa9_aad0_4d46_85c9_11cf40ac2928.slice/crio-47b529d67444668042864d5eedf38794ece7b5e06a0175f71f0117b8f3a7b4e8 WatchSource:0}: Error finding container 47b529d67444668042864d5eedf38794ece7b5e06a0175f71f0117b8f3a7b4e8: Status 404 returned error can't find the container with id 47b529d67444668042864d5eedf38794ece7b5e06a0175f71f0117b8f3a7b4e8 Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.465737 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-h4bzm"] Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.606638 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6c6a-account-create-update-cvfsb"] Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.615630 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a859-account-create-update-82cnk"] Nov 28 12:54:22 crc kubenswrapper[4779]: W1128 12:54:22.632473 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73eaf386_eead_4fbe_bbfb_a41423521b9f.slice/crio-d15dff3c71e5d8f253836d385746f9ab6bc386f64b54d86b8ddee2aa1f78c2d5 WatchSource:0}: Error finding container d15dff3c71e5d8f253836d385746f9ab6bc386f64b54d86b8ddee2aa1f78c2d5: Status 404 returned error can't find the container with id d15dff3c71e5d8f253836d385746f9ab6bc386f64b54d86b8ddee2aa1f78c2d5 Nov 28 12:54:22 crc kubenswrapper[4779]: W1128 12:54:22.632784 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef019325_ce2d_4119_85d3_eac3868665ce.slice/crio-55274084dd42c9521460545b28a4edb14541785ff1daf477b1623b3b4c0b7045 WatchSource:0}: Error finding container 55274084dd42c9521460545b28a4edb14541785ff1daf477b1623b3b4c0b7045: Status 404 returned error can't find the container with id 55274084dd42c9521460545b28a4edb14541785ff1daf477b1623b3b4c0b7045 Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.751458 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-bhx9p"] Nov 28 12:54:22 crc kubenswrapper[4779]: W1128 12:54:22.767264 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb001146a_ebfc_4821_b7b8_3dbbf14749c9.slice/crio-3761697b319ad03057121313b61d4009d0ce2d4f0baa2d32bc20bc78b29537b0 WatchSource:0}: Error finding container 3761697b319ad03057121313b61d4009d0ce2d4f0baa2d32bc20bc78b29537b0: Status 404 returned error can't find the container with id 3761697b319ad03057121313b61d4009d0ce2d4f0baa2d32bc20bc78b29537b0 Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.826182 4779 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/keystone-db-sync-vlmfj"] Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.851199 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-05a1-account-create-update-bjnnp"] Nov 28 12:54:22 crc kubenswrapper[4779]: I1128 12:54:22.862180 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bl4b8"] Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.032796 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-36a9-account-create-update-blv9g"] Nov 28 12:54:23 crc kubenswrapper[4779]: W1128 12:54:23.081460 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod917a2830_66a9_4c55_9cf0_74c6dac98030.slice/crio-a8eb8e7536632aaadb2bfff75119d503812586bef7c771cc8ff3ff74c84e0066 WatchSource:0}: Error finding container a8eb8e7536632aaadb2bfff75119d503812586bef7c771cc8ff3ff74c84e0066: Status 404 returned error can't find the container with id a8eb8e7536632aaadb2bfff75119d503812586bef7c771cc8ff3ff74c84e0066 Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.189723 4779 generic.go:334] "Generic (PLEG): container finished" podID="409d8e8a-a5ce-4940-8d6b-f58c87eeb764" containerID="33096196fd271bbe3108c36cc829c5d0a96697b7ea1c7ef4e6f62c1b83e3c738" exitCode=0 Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.189802 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7bg4l-config-4xlc9" event={"ID":"409d8e8a-a5ce-4940-8d6b-f58c87eeb764","Type":"ContainerDied","Data":"33096196fd271bbe3108c36cc829c5d0a96697b7ea1c7ef4e6f62c1b83e3c738"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.190853 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vlmfj" event={"ID":"4ae2270c-607f-4315-959e-eb8536afafe9","Type":"ContainerStarted","Data":"dfe0fa4ff910f5c2776c005e95bd3bfce2e4cc8efe541af563cca16a61fc4f8c"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.191897 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bl4b8" event={"ID":"0aafeb02-0f52-4cf1-b856-e357be7e80b2","Type":"ContainerStarted","Data":"69e517a5ffe52955666da4363bf4e9457fe9e785b8f43f7f8cb2cafdba5a1e79"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.194145 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a859-account-create-update-82cnk" event={"ID":"ef019325-ce2d-4119-85d3-eac3868665ce","Type":"ContainerStarted","Data":"3b0fc7683f2230e31c28fb1a5a9771b7c83d12a389ecb78437e32f3c551262e8"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.194173 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a859-account-create-update-82cnk" event={"ID":"ef019325-ce2d-4119-85d3-eac3868665ce","Type":"ContainerStarted","Data":"55274084dd42c9521460545b28a4edb14541785ff1daf477b1623b3b4c0b7045"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.195575 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-36a9-account-create-update-blv9g" event={"ID":"917a2830-66a9-4c55-9cf0-74c6dac98030","Type":"ContainerStarted","Data":"a8eb8e7536632aaadb2bfff75119d503812586bef7c771cc8ff3ff74c84e0066"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.197534 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-05a1-account-create-update-bjnnp" 
event={"ID":"b3ebdf4b-0d38-4646-9e9d-742c3152849c","Type":"ContainerStarted","Data":"93db06973d8e05ad0b9283a1955630c607c962cf1fe004a75e5916a4363e4d32"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.197559 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-05a1-account-create-update-bjnnp" event={"ID":"b3ebdf4b-0d38-4646-9e9d-742c3152849c","Type":"ContainerStarted","Data":"58d67a6b4c688e4d123ace125de249fc25265bb8bbf2240f1855ed600b0632ab"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.200956 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6c6a-account-create-update-cvfsb" event={"ID":"73eaf386-eead-4fbe-bbfb-a41423521b9f","Type":"ContainerStarted","Data":"ace92a66f48960cbff8f49d4255b9015c950a72ce94c5e5e8560d9829fe42fed"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.200980 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6c6a-account-create-update-cvfsb" event={"ID":"73eaf386-eead-4fbe-bbfb-a41423521b9f","Type":"ContainerStarted","Data":"d15dff3c71e5d8f253836d385746f9ab6bc386f64b54d86b8ddee2aa1f78c2d5"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.202772 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-bhx9p" event={"ID":"b001146a-ebfc-4821-b7b8-3dbbf14749c9","Type":"ContainerStarted","Data":"3bd66579955aa434bcbddd29386fbe17ea2dba6b7b6ee070c4c82d4c626c53c0"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.202811 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-bhx9p" event={"ID":"b001146a-ebfc-4821-b7b8-3dbbf14749c9","Type":"ContainerStarted","Data":"3761697b319ad03057121313b61d4009d0ce2d4f0baa2d32bc20bc78b29537b0"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.205704 4779 generic.go:334] "Generic (PLEG): container finished" podID="07dc1232-19a1-43de-9b7b-9613e964a39b" containerID="d5712495c1505a6517ba972b5a5eff011ae8d6a80eefe06e89033e23e512c235" exitCode=0 Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.205768 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-h4bzm" event={"ID":"07dc1232-19a1-43de-9b7b-9613e964a39b","Type":"ContainerDied","Data":"d5712495c1505a6517ba972b5a5eff011ae8d6a80eefe06e89033e23e512c235"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.205793 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-h4bzm" event={"ID":"07dc1232-19a1-43de-9b7b-9613e964a39b","Type":"ContainerStarted","Data":"e210e5512549df865c70e3089453b20c9f9e1441b364b600ab7eb6d1f6077e94"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.208834 4779 generic.go:334] "Generic (PLEG): container finished" podID="4e0e6aa9-aad0-4d46-85c9-11cf40ac2928" containerID="e781b14fc8cd82214a4ac1e0bc8b2cbae527cfa43c17b3146ab4866d4c42828f" exitCode=0 Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.208869 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-js645" event={"ID":"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928","Type":"ContainerDied","Data":"e781b14fc8cd82214a4ac1e0bc8b2cbae527cfa43c17b3146ab4866d4c42828f"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.208890 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-js645" event={"ID":"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928","Type":"ContainerStarted","Data":"47b529d67444668042864d5eedf38794ece7b5e06a0175f71f0117b8f3a7b4e8"} Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.227265 
4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-05a1-account-create-update-bjnnp" podStartSLOduration=2.227244622 podStartE2EDuration="2.227244622s" podCreationTimestamp="2025-11-28 12:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:54:23.215507942 +0000 UTC m=+1123.781183296" watchObservedRunningTime="2025-11-28 12:54:23.227244622 +0000 UTC m=+1123.792919966" Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.234124 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-a859-account-create-update-82cnk" podStartSLOduration=2.234105413 podStartE2EDuration="2.234105413s" podCreationTimestamp="2025-11-28 12:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:54:23.231315649 +0000 UTC m=+1123.796991003" watchObservedRunningTime="2025-11-28 12:54:23.234105413 +0000 UTC m=+1123.799780767" Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.244445 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6c6a-account-create-update-cvfsb" podStartSLOduration=2.244429435 podStartE2EDuration="2.244429435s" podCreationTimestamp="2025-11-28 12:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:54:23.241758805 +0000 UTC m=+1123.807434159" watchObservedRunningTime="2025-11-28 12:54:23.244429435 +0000 UTC m=+1123.810104789" Nov 28 12:54:23 crc kubenswrapper[4779]: I1128 12:54:23.262163 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-bhx9p" podStartSLOduration=2.262142683 podStartE2EDuration="2.262142683s" podCreationTimestamp="2025-11-28 12:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:54:23.255265001 +0000 UTC m=+1123.820940355" watchObservedRunningTime="2025-11-28 12:54:23.262142683 +0000 UTC m=+1123.827818037" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.218306 4779 generic.go:334] "Generic (PLEG): container finished" podID="ef019325-ce2d-4119-85d3-eac3868665ce" containerID="3b0fc7683f2230e31c28fb1a5a9771b7c83d12a389ecb78437e32f3c551262e8" exitCode=0 Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.218622 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a859-account-create-update-82cnk" event={"ID":"ef019325-ce2d-4119-85d3-eac3868665ce","Type":"ContainerDied","Data":"3b0fc7683f2230e31c28fb1a5a9771b7c83d12a389ecb78437e32f3c551262e8"} Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.223047 4779 generic.go:334] "Generic (PLEG): container finished" podID="917a2830-66a9-4c55-9cf0-74c6dac98030" containerID="fe7b5777bf81ec3381397ea71bbabb6f37a576a56aaec9256a74fb1b34b4fafe" exitCode=0 Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.223132 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-36a9-account-create-update-blv9g" event={"ID":"917a2830-66a9-4c55-9cf0-74c6dac98030","Type":"ContainerDied","Data":"fe7b5777bf81ec3381397ea71bbabb6f37a576a56aaec9256a74fb1b34b4fafe"} Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.224690 4779 generic.go:334] "Generic (PLEG): container finished" 
podID="b001146a-ebfc-4821-b7b8-3dbbf14749c9" containerID="3bd66579955aa434bcbddd29386fbe17ea2dba6b7b6ee070c4c82d4c626c53c0" exitCode=0 Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.224751 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-bhx9p" event={"ID":"b001146a-ebfc-4821-b7b8-3dbbf14749c9","Type":"ContainerDied","Data":"3bd66579955aa434bcbddd29386fbe17ea2dba6b7b6ee070c4c82d4c626c53c0"} Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.226461 4779 generic.go:334] "Generic (PLEG): container finished" podID="b3ebdf4b-0d38-4646-9e9d-742c3152849c" containerID="93db06973d8e05ad0b9283a1955630c607c962cf1fe004a75e5916a4363e4d32" exitCode=0 Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.226499 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-05a1-account-create-update-bjnnp" event={"ID":"b3ebdf4b-0d38-4646-9e9d-742c3152849c","Type":"ContainerDied","Data":"93db06973d8e05ad0b9283a1955630c607c962cf1fe004a75e5916a4363e4d32"} Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.227732 4779 generic.go:334] "Generic (PLEG): container finished" podID="73eaf386-eead-4fbe-bbfb-a41423521b9f" containerID="ace92a66f48960cbff8f49d4255b9015c950a72ce94c5e5e8560d9829fe42fed" exitCode=0 Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.227769 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6c6a-account-create-update-cvfsb" event={"ID":"73eaf386-eead-4fbe-bbfb-a41423521b9f","Type":"ContainerDied","Data":"ace92a66f48960cbff8f49d4255b9015c950a72ce94c5e5e8560d9829fe42fed"} Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.229118 4779 generic.go:334] "Generic (PLEG): container finished" podID="0aafeb02-0f52-4cf1-b856-e357be7e80b2" containerID="ba8c4cb8663ad9a2e4f4df75a7f7dd6f90b98d7d5ad7ccd8f0d6f920c243499f" exitCode=0 Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.229271 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bl4b8" event={"ID":"0aafeb02-0f52-4cf1-b856-e357be7e80b2","Type":"ContainerDied","Data":"ba8c4cb8663ad9a2e4f4df75a7f7dd6f90b98d7d5ad7ccd8f0d6f920c243499f"} Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.567550 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.634708 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-run-ovn\") pod \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.634805 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-scripts\") pod \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.634846 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "409d8e8a-a5ce-4940-8d6b-f58c87eeb764" (UID: "409d8e8a-a5ce-4940-8d6b-f58c87eeb764"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.634865 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-run\") pod \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.634966 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-additional-scripts\") pod \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.634995 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-log-ovn\") pod \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.635070 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k28jm\" (UniqueName: \"kubernetes.io/projected/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-kube-api-access-k28jm\") pod \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\" (UID: \"409d8e8a-a5ce-4940-8d6b-f58c87eeb764\") " Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.635409 4779 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.636024 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-scripts" (OuterVolumeSpecName: "scripts") pod "409d8e8a-a5ce-4940-8d6b-f58c87eeb764" (UID: "409d8e8a-a5ce-4940-8d6b-f58c87eeb764"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.636656 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "409d8e8a-a5ce-4940-8d6b-f58c87eeb764" (UID: "409d8e8a-a5ce-4940-8d6b-f58c87eeb764"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.636684 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-run" (OuterVolumeSpecName: "var-run") pod "409d8e8a-a5ce-4940-8d6b-f58c87eeb764" (UID: "409d8e8a-a5ce-4940-8d6b-f58c87eeb764"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.636704 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "409d8e8a-a5ce-4940-8d6b-f58c87eeb764" (UID: "409d8e8a-a5ce-4940-8d6b-f58c87eeb764"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.648322 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-kube-api-access-k28jm" (OuterVolumeSpecName: "kube-api-access-k28jm") pod "409d8e8a-a5ce-4940-8d6b-f58c87eeb764" (UID: "409d8e8a-a5ce-4940-8d6b-f58c87eeb764"). InnerVolumeSpecName "kube-api-access-k28jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.712916 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-js645" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.721217 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-h4bzm" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.735786 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mxq8\" (UniqueName: \"kubernetes.io/projected/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928-kube-api-access-2mxq8\") pod \"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928\" (UID: \"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928\") " Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.736007 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928-operator-scripts\") pod \"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928\" (UID: \"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928\") " Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.736492 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k28jm\" (UniqueName: \"kubernetes.io/projected/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-kube-api-access-k28jm\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.736515 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.736524 4779 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.736533 4779 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.736541 4779 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/409d8e8a-a5ce-4940-8d6b-f58c87eeb764-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.737218 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e0e6aa9-aad0-4d46-85c9-11cf40ac2928" (UID: "4e0e6aa9-aad0-4d46-85c9-11cf40ac2928"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.753293 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928-kube-api-access-2mxq8" (OuterVolumeSpecName: "kube-api-access-2mxq8") pod "4e0e6aa9-aad0-4d46-85c9-11cf40ac2928" (UID: "4e0e6aa9-aad0-4d46-85c9-11cf40ac2928"). InnerVolumeSpecName "kube-api-access-2mxq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.838001 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdtvc\" (UniqueName: \"kubernetes.io/projected/07dc1232-19a1-43de-9b7b-9613e964a39b-kube-api-access-qdtvc\") pod \"07dc1232-19a1-43de-9b7b-9613e964a39b\" (UID: \"07dc1232-19a1-43de-9b7b-9613e964a39b\") " Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.838378 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07dc1232-19a1-43de-9b7b-9613e964a39b-operator-scripts\") pod \"07dc1232-19a1-43de-9b7b-9613e964a39b\" (UID: \"07dc1232-19a1-43de-9b7b-9613e964a39b\") " Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.838933 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07dc1232-19a1-43de-9b7b-9613e964a39b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07dc1232-19a1-43de-9b7b-9613e964a39b" (UID: "07dc1232-19a1-43de-9b7b-9613e964a39b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.839060 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.839080 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mxq8\" (UniqueName: \"kubernetes.io/projected/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928-kube-api-access-2mxq8\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.839107 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07dc1232-19a1-43de-9b7b-9613e964a39b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.840758 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07dc1232-19a1-43de-9b7b-9613e964a39b-kube-api-access-qdtvc" (OuterVolumeSpecName: "kube-api-access-qdtvc") pod "07dc1232-19a1-43de-9b7b-9613e964a39b" (UID: "07dc1232-19a1-43de-9b7b-9613e964a39b"). InnerVolumeSpecName "kube-api-access-qdtvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:24 crc kubenswrapper[4779]: I1128 12:54:24.940890 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdtvc\" (UniqueName: \"kubernetes.io/projected/07dc1232-19a1-43de-9b7b-9613e964a39b-kube-api-access-qdtvc\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:25 crc kubenswrapper[4779]: I1128 12:54:25.245539 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7bg4l-config-4xlc9" event={"ID":"409d8e8a-a5ce-4940-8d6b-f58c87eeb764","Type":"ContainerDied","Data":"233277d5d3923a3c23fb3cc76abb54a5200d84e41c18f4d8409b1a3065b6192c"} Nov 28 12:54:25 crc kubenswrapper[4779]: I1128 12:54:25.245590 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="233277d5d3923a3c23fb3cc76abb54a5200d84e41c18f4d8409b1a3065b6192c" Nov 28 12:54:25 crc kubenswrapper[4779]: I1128 12:54:25.247316 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7bg4l-config-4xlc9" Nov 28 12:54:25 crc kubenswrapper[4779]: I1128 12:54:25.248859 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-h4bzm" event={"ID":"07dc1232-19a1-43de-9b7b-9613e964a39b","Type":"ContainerDied","Data":"e210e5512549df865c70e3089453b20c9f9e1441b364b600ab7eb6d1f6077e94"} Nov 28 12:54:25 crc kubenswrapper[4779]: I1128 12:54:25.248908 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-h4bzm" Nov 28 12:54:25 crc kubenswrapper[4779]: I1128 12:54:25.248919 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e210e5512549df865c70e3089453b20c9f9e1441b364b600ab7eb6d1f6077e94" Nov 28 12:54:25 crc kubenswrapper[4779]: I1128 12:54:25.250341 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-js645" event={"ID":"4e0e6aa9-aad0-4d46-85c9-11cf40ac2928","Type":"ContainerDied","Data":"47b529d67444668042864d5eedf38794ece7b5e06a0175f71f0117b8f3a7b4e8"} Nov 28 12:54:25 crc kubenswrapper[4779]: I1128 12:54:25.250374 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47b529d67444668042864d5eedf38794ece7b5e06a0175f71f0117b8f3a7b4e8" Nov 28 12:54:25 crc kubenswrapper[4779]: I1128 12:54:25.250334 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-js645" Nov 28 12:54:25 crc kubenswrapper[4779]: I1128 12:54:25.687073 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7bg4l-config-4xlc9"] Nov 28 12:54:25 crc kubenswrapper[4779]: I1128 12:54:25.694246 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7bg4l-config-4xlc9"] Nov 28 12:54:25 crc kubenswrapper[4779]: I1128 12:54:25.744581 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="409d8e8a-a5ce-4940-8d6b-f58c87eeb764" path="/var/lib/kubelet/pods/409d8e8a-a5ce-4940-8d6b-f58c87eeb764/volumes" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.773454 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-bhx9p" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.777318 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bl4b8" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.781698 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-36a9-account-create-update-blv9g" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.846862 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6c6a-account-create-update-cvfsb" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.856800 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-05a1-account-create-update-bjnnp" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.864491 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a859-account-create-update-82cnk" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.914035 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfbdg\" (UniqueName: \"kubernetes.io/projected/0aafeb02-0f52-4cf1-b856-e357be7e80b2-kube-api-access-qfbdg\") pod \"0aafeb02-0f52-4cf1-b856-e357be7e80b2\" (UID: \"0aafeb02-0f52-4cf1-b856-e357be7e80b2\") " Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.914123 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt64q\" (UniqueName: \"kubernetes.io/projected/917a2830-66a9-4c55-9cf0-74c6dac98030-kube-api-access-zt64q\") pod \"917a2830-66a9-4c55-9cf0-74c6dac98030\" (UID: \"917a2830-66a9-4c55-9cf0-74c6dac98030\") " Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.914228 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b001146a-ebfc-4821-b7b8-3dbbf14749c9-operator-scripts\") pod \"b001146a-ebfc-4821-b7b8-3dbbf14749c9\" (UID: \"b001146a-ebfc-4821-b7b8-3dbbf14749c9\") " Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.914261 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4k89\" (UniqueName: \"kubernetes.io/projected/b3ebdf4b-0d38-4646-9e9d-742c3152849c-kube-api-access-r4k89\") pod \"b3ebdf4b-0d38-4646-9e9d-742c3152849c\" (UID: \"b3ebdf4b-0d38-4646-9e9d-742c3152849c\") " Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.914302 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7b4f\" (UniqueName: \"kubernetes.io/projected/73eaf386-eead-4fbe-bbfb-a41423521b9f-kube-api-access-l7b4f\") pod \"73eaf386-eead-4fbe-bbfb-a41423521b9f\" (UID: \"73eaf386-eead-4fbe-bbfb-a41423521b9f\") " Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.914371 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6tlx\" (UniqueName: \"kubernetes.io/projected/b001146a-ebfc-4821-b7b8-3dbbf14749c9-kube-api-access-v6tlx\") pod \"b001146a-ebfc-4821-b7b8-3dbbf14749c9\" (UID: \"b001146a-ebfc-4821-b7b8-3dbbf14749c9\") " Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.914406 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sngpx\" (UniqueName: \"kubernetes.io/projected/ef019325-ce2d-4119-85d3-eac3868665ce-kube-api-access-sngpx\") pod \"ef019325-ce2d-4119-85d3-eac3868665ce\" (UID: \"ef019325-ce2d-4119-85d3-eac3868665ce\") " Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.914453 4779 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/917a2830-66a9-4c55-9cf0-74c6dac98030-operator-scripts\") pod \"917a2830-66a9-4c55-9cf0-74c6dac98030\" (UID: \"917a2830-66a9-4c55-9cf0-74c6dac98030\") " Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.914492 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ebdf4b-0d38-4646-9e9d-742c3152849c-operator-scripts\") pod \"b3ebdf4b-0d38-4646-9e9d-742c3152849c\" (UID: \"b3ebdf4b-0d38-4646-9e9d-742c3152849c\") " Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.914555 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef019325-ce2d-4119-85d3-eac3868665ce-operator-scripts\") pod \"ef019325-ce2d-4119-85d3-eac3868665ce\" (UID: \"ef019325-ce2d-4119-85d3-eac3868665ce\") " Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.914600 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aafeb02-0f52-4cf1-b856-e357be7e80b2-operator-scripts\") pod \"0aafeb02-0f52-4cf1-b856-e357be7e80b2\" (UID: \"0aafeb02-0f52-4cf1-b856-e357be7e80b2\") " Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.914650 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73eaf386-eead-4fbe-bbfb-a41423521b9f-operator-scripts\") pod \"73eaf386-eead-4fbe-bbfb-a41423521b9f\" (UID: \"73eaf386-eead-4fbe-bbfb-a41423521b9f\") " Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.917292 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73eaf386-eead-4fbe-bbfb-a41423521b9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73eaf386-eead-4fbe-bbfb-a41423521b9f" (UID: "73eaf386-eead-4fbe-bbfb-a41423521b9f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.920343 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ebdf4b-0d38-4646-9e9d-742c3152849c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3ebdf4b-0d38-4646-9e9d-742c3152849c" (UID: "b3ebdf4b-0d38-4646-9e9d-742c3152849c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.920616 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef019325-ce2d-4119-85d3-eac3868665ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef019325-ce2d-4119-85d3-eac3868665ce" (UID: "ef019325-ce2d-4119-85d3-eac3868665ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.920641 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/917a2830-66a9-4c55-9cf0-74c6dac98030-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "917a2830-66a9-4c55-9cf0-74c6dac98030" (UID: "917a2830-66a9-4c55-9cf0-74c6dac98030"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.920954 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aafeb02-0f52-4cf1-b856-e357be7e80b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0aafeb02-0f52-4cf1-b856-e357be7e80b2" (UID: "0aafeb02-0f52-4cf1-b856-e357be7e80b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.921063 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aafeb02-0f52-4cf1-b856-e357be7e80b2-kube-api-access-qfbdg" (OuterVolumeSpecName: "kube-api-access-qfbdg") pod "0aafeb02-0f52-4cf1-b856-e357be7e80b2" (UID: "0aafeb02-0f52-4cf1-b856-e357be7e80b2"). InnerVolumeSpecName "kube-api-access-qfbdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.921333 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b001146a-ebfc-4821-b7b8-3dbbf14749c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b001146a-ebfc-4821-b7b8-3dbbf14749c9" (UID: "b001146a-ebfc-4821-b7b8-3dbbf14749c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.924486 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b001146a-ebfc-4821-b7b8-3dbbf14749c9-kube-api-access-v6tlx" (OuterVolumeSpecName: "kube-api-access-v6tlx") pod "b001146a-ebfc-4821-b7b8-3dbbf14749c9" (UID: "b001146a-ebfc-4821-b7b8-3dbbf14749c9"). InnerVolumeSpecName "kube-api-access-v6tlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.926261 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ebdf4b-0d38-4646-9e9d-742c3152849c-kube-api-access-r4k89" (OuterVolumeSpecName: "kube-api-access-r4k89") pod "b3ebdf4b-0d38-4646-9e9d-742c3152849c" (UID: "b3ebdf4b-0d38-4646-9e9d-742c3152849c"). InnerVolumeSpecName "kube-api-access-r4k89". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.931032 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef019325-ce2d-4119-85d3-eac3868665ce-kube-api-access-sngpx" (OuterVolumeSpecName: "kube-api-access-sngpx") pod "ef019325-ce2d-4119-85d3-eac3868665ce" (UID: "ef019325-ce2d-4119-85d3-eac3868665ce"). InnerVolumeSpecName "kube-api-access-sngpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.931206 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73eaf386-eead-4fbe-bbfb-a41423521b9f-kube-api-access-l7b4f" (OuterVolumeSpecName: "kube-api-access-l7b4f") pod "73eaf386-eead-4fbe-bbfb-a41423521b9f" (UID: "73eaf386-eead-4fbe-bbfb-a41423521b9f"). InnerVolumeSpecName "kube-api-access-l7b4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:27 crc kubenswrapper[4779]: I1128 12:54:27.938620 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/917a2830-66a9-4c55-9cf0-74c6dac98030-kube-api-access-zt64q" (OuterVolumeSpecName: "kube-api-access-zt64q") pod "917a2830-66a9-4c55-9cf0-74c6dac98030" (UID: "917a2830-66a9-4c55-9cf0-74c6dac98030"). InnerVolumeSpecName "kube-api-access-zt64q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.017269 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b001146a-ebfc-4821-b7b8-3dbbf14749c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.017319 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4k89\" (UniqueName: \"kubernetes.io/projected/b3ebdf4b-0d38-4646-9e9d-742c3152849c-kube-api-access-r4k89\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.017343 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7b4f\" (UniqueName: \"kubernetes.io/projected/73eaf386-eead-4fbe-bbfb-a41423521b9f-kube-api-access-l7b4f\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.017361 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6tlx\" (UniqueName: \"kubernetes.io/projected/b001146a-ebfc-4821-b7b8-3dbbf14749c9-kube-api-access-v6tlx\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.017378 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sngpx\" (UniqueName: \"kubernetes.io/projected/ef019325-ce2d-4119-85d3-eac3868665ce-kube-api-access-sngpx\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.017394 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/917a2830-66a9-4c55-9cf0-74c6dac98030-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.017411 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3ebdf4b-0d38-4646-9e9d-742c3152849c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.017431 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef019325-ce2d-4119-85d3-eac3868665ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.017447 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aafeb02-0f52-4cf1-b856-e357be7e80b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.017466 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73eaf386-eead-4fbe-bbfb-a41423521b9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.017483 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfbdg\" (UniqueName: \"kubernetes.io/projected/0aafeb02-0f52-4cf1-b856-e357be7e80b2-kube-api-access-qfbdg\") on node \"crc\" DevicePath 
\"\"" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.017500 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt64q\" (UniqueName: \"kubernetes.io/projected/917a2830-66a9-4c55-9cf0-74c6dac98030-kube-api-access-zt64q\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.278543 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-36a9-account-create-update-blv9g" event={"ID":"917a2830-66a9-4c55-9cf0-74c6dac98030","Type":"ContainerDied","Data":"a8eb8e7536632aaadb2bfff75119d503812586bef7c771cc8ff3ff74c84e0066"} Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.278588 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8eb8e7536632aaadb2bfff75119d503812586bef7c771cc8ff3ff74c84e0066" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.278631 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-36a9-account-create-update-blv9g" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.280902 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-bhx9p" event={"ID":"b001146a-ebfc-4821-b7b8-3dbbf14749c9","Type":"ContainerDied","Data":"3761697b319ad03057121313b61d4009d0ce2d4f0baa2d32bc20bc78b29537b0"} Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.281027 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3761697b319ad03057121313b61d4009d0ce2d4f0baa2d32bc20bc78b29537b0" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.281098 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-bhx9p" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.295993 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-05a1-account-create-update-bjnnp" event={"ID":"b3ebdf4b-0d38-4646-9e9d-742c3152849c","Type":"ContainerDied","Data":"58d67a6b4c688e4d123ace125de249fc25265bb8bbf2240f1855ed600b0632ab"} Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.296062 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58d67a6b4c688e4d123ace125de249fc25265bb8bbf2240f1855ed600b0632ab" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.296364 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-05a1-account-create-update-bjnnp" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.298059 4779 generic.go:334] "Generic (PLEG): container finished" podID="30731004-d3bb-4ed7-820a-37fe3e7ee7e1" containerID="1eada839e5267d3245004362b6eb536e129a4d143dc96d5010455efc59426b88" exitCode=0 Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.298169 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hzq4r" event={"ID":"30731004-d3bb-4ed7-820a-37fe3e7ee7e1","Type":"ContainerDied","Data":"1eada839e5267d3245004362b6eb536e129a4d143dc96d5010455efc59426b88"} Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.301586 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6c6a-account-create-update-cvfsb" event={"ID":"73eaf386-eead-4fbe-bbfb-a41423521b9f","Type":"ContainerDied","Data":"d15dff3c71e5d8f253836d385746f9ab6bc386f64b54d86b8ddee2aa1f78c2d5"} Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.301625 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d15dff3c71e5d8f253836d385746f9ab6bc386f64b54d86b8ddee2aa1f78c2d5" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.301680 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6c6a-account-create-update-cvfsb" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.305658 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vlmfj" event={"ID":"4ae2270c-607f-4315-959e-eb8536afafe9","Type":"ContainerStarted","Data":"26fededc6d08e301a2ca39e554e235beff22fb28cc2965eedeb2d6746ed78e18"} Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.309112 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bl4b8" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.309165 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bl4b8" event={"ID":"0aafeb02-0f52-4cf1-b856-e357be7e80b2","Type":"ContainerDied","Data":"69e517a5ffe52955666da4363bf4e9457fe9e785b8f43f7f8cb2cafdba5a1e79"} Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.310127 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e517a5ffe52955666da4363bf4e9457fe9e785b8f43f7f8cb2cafdba5a1e79" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.310753 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a859-account-create-update-82cnk" event={"ID":"ef019325-ce2d-4119-85d3-eac3868665ce","Type":"ContainerDied","Data":"55274084dd42c9521460545b28a4edb14541785ff1daf477b1623b3b4c0b7045"} Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.310796 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55274084dd42c9521460545b28a4edb14541785ff1daf477b1623b3b4c0b7045" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.310863 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a859-account-create-update-82cnk" Nov 28 12:54:28 crc kubenswrapper[4779]: I1128 12:54:28.358874 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-vlmfj" podStartSLOduration=2.578359195 podStartE2EDuration="7.358852665s" podCreationTimestamp="2025-11-28 12:54:21 +0000 UTC" firstStartedPulling="2025-11-28 12:54:22.864513102 +0000 UTC m=+1123.430188446" lastFinishedPulling="2025-11-28 12:54:27.645006552 +0000 UTC m=+1128.210681916" observedRunningTime="2025-11-28 12:54:28.354308575 +0000 UTC m=+1128.919983969" watchObservedRunningTime="2025-11-28 12:54:28.358852665 +0000 UTC m=+1128.924528029" Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.784299 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.888515 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcfwb\" (UniqueName: \"kubernetes.io/projected/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-kube-api-access-kcfwb\") pod \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.888622 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-config-data\") pod \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.888738 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-combined-ca-bundle\") pod \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.888898 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-db-sync-config-data\") pod \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\" (UID: \"30731004-d3bb-4ed7-820a-37fe3e7ee7e1\") " Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.894594 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "30731004-d3bb-4ed7-820a-37fe3e7ee7e1" (UID: "30731004-d3bb-4ed7-820a-37fe3e7ee7e1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.896574 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-kube-api-access-kcfwb" (OuterVolumeSpecName: "kube-api-access-kcfwb") pod "30731004-d3bb-4ed7-820a-37fe3e7ee7e1" (UID: "30731004-d3bb-4ed7-820a-37fe3e7ee7e1"). InnerVolumeSpecName "kube-api-access-kcfwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.934379 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30731004-d3bb-4ed7-820a-37fe3e7ee7e1" (UID: "30731004-d3bb-4ed7-820a-37fe3e7ee7e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.939635 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-config-data" (OuterVolumeSpecName: "config-data") pod "30731004-d3bb-4ed7-820a-37fe3e7ee7e1" (UID: "30731004-d3bb-4ed7-820a-37fe3e7ee7e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.990881 4779 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.990910 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcfwb\" (UniqueName: \"kubernetes.io/projected/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-kube-api-access-kcfwb\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.990923 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:29 crc kubenswrapper[4779]: I1128 12:54:29.990931 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30731004-d3bb-4ed7-820a-37fe3e7ee7e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.337905 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hzq4r" event={"ID":"30731004-d3bb-4ed7-820a-37fe3e7ee7e1","Type":"ContainerDied","Data":"f3ad39691ae1c31af9fa7e486e555a12bb8dc4bc040b63acb098e466679af52c"} Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.337978 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ad39691ae1c31af9fa7e486e555a12bb8dc4bc040b63acb098e466679af52c" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.338062 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hzq4r" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.784660 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-lf8cn"] Nov 28 12:54:30 crc kubenswrapper[4779]: E1128 12:54:30.785053 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409d8e8a-a5ce-4940-8d6b-f58c87eeb764" containerName="ovn-config" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785071 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="409d8e8a-a5ce-4940-8d6b-f58c87eeb764" containerName="ovn-config" Nov 28 12:54:30 crc kubenswrapper[4779]: E1128 12:54:30.785083 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef019325-ce2d-4119-85d3-eac3868665ce" containerName="mariadb-account-create-update" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785092 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef019325-ce2d-4119-85d3-eac3868665ce" containerName="mariadb-account-create-update" Nov 28 12:54:30 crc kubenswrapper[4779]: E1128 12:54:30.785123 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aafeb02-0f52-4cf1-b856-e357be7e80b2" containerName="mariadb-database-create" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785129 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aafeb02-0f52-4cf1-b856-e357be7e80b2" containerName="mariadb-database-create" Nov 28 12:54:30 crc kubenswrapper[4779]: E1128 12:54:30.785140 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0e6aa9-aad0-4d46-85c9-11cf40ac2928" containerName="mariadb-database-create" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785145 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0e6aa9-aad0-4d46-85c9-11cf40ac2928" containerName="mariadb-database-create" Nov 28 12:54:30 crc kubenswrapper[4779]: E1128 12:54:30.785158 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07dc1232-19a1-43de-9b7b-9613e964a39b" containerName="mariadb-database-create" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785163 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="07dc1232-19a1-43de-9b7b-9613e964a39b" containerName="mariadb-database-create" Nov 28 12:54:30 crc kubenswrapper[4779]: E1128 12:54:30.785180 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73eaf386-eead-4fbe-bbfb-a41423521b9f" containerName="mariadb-account-create-update" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785186 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="73eaf386-eead-4fbe-bbfb-a41423521b9f" containerName="mariadb-account-create-update" Nov 28 12:54:30 crc kubenswrapper[4779]: E1128 12:54:30.785195 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30731004-d3bb-4ed7-820a-37fe3e7ee7e1" containerName="glance-db-sync" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785201 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="30731004-d3bb-4ed7-820a-37fe3e7ee7e1" containerName="glance-db-sync" Nov 28 12:54:30 crc kubenswrapper[4779]: E1128 12:54:30.785213 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ebdf4b-0d38-4646-9e9d-742c3152849c" containerName="mariadb-account-create-update" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785219 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ebdf4b-0d38-4646-9e9d-742c3152849c" containerName="mariadb-account-create-update" Nov 28 12:54:30 crc 
kubenswrapper[4779]: E1128 12:54:30.785228 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917a2830-66a9-4c55-9cf0-74c6dac98030" containerName="mariadb-account-create-update" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785234 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="917a2830-66a9-4c55-9cf0-74c6dac98030" containerName="mariadb-account-create-update" Nov 28 12:54:30 crc kubenswrapper[4779]: E1128 12:54:30.785248 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b001146a-ebfc-4821-b7b8-3dbbf14749c9" containerName="mariadb-database-create" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785255 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b001146a-ebfc-4821-b7b8-3dbbf14749c9" containerName="mariadb-database-create" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785397 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ebdf4b-0d38-4646-9e9d-742c3152849c" containerName="mariadb-account-create-update" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785412 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="73eaf386-eead-4fbe-bbfb-a41423521b9f" containerName="mariadb-account-create-update" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785422 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="30731004-d3bb-4ed7-820a-37fe3e7ee7e1" containerName="glance-db-sync" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785432 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="917a2830-66a9-4c55-9cf0-74c6dac98030" containerName="mariadb-account-create-update" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785442 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aafeb02-0f52-4cf1-b856-e357be7e80b2" containerName="mariadb-database-create" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785450 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0e6aa9-aad0-4d46-85c9-11cf40ac2928" containerName="mariadb-database-create" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785460 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="409d8e8a-a5ce-4940-8d6b-f58c87eeb764" containerName="ovn-config" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785472 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef019325-ce2d-4119-85d3-eac3868665ce" containerName="mariadb-account-create-update" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785482 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="07dc1232-19a1-43de-9b7b-9613e964a39b" containerName="mariadb-database-create" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.785494 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="b001146a-ebfc-4821-b7b8-3dbbf14749c9" containerName="mariadb-database-create" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.786514 4779 util.go:30] "No sandbox for pod can be found. 
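The RemoveStaleState block above is the kubelet's resource managers dropping accounting for containers of pods that no longer exist: each (podUID, containerName) gets an E-level cpu_manager record, a state_mem "Deleted CPUSet assignment" record, and a memory_manager record. A cross-check sketch over those records (stale_state_coverage is my own name; it only groups what is visible in the log):

    import re
    from collections import defaultdict

    PAIR_RE = re.compile(r'podUID="(?P<uid>[^"]+)" containerName="(?P<name>[^"]+)"')
    MANAGERS = ("cpu_manager", "state_mem", "memory_manager")

    def stale_state_coverage(lines):
        """(podUID, containerName) -> set of managers that logged its removal."""
        seen = defaultdict(set)
        for ln in lines:
            m = PAIR_RE.search(ln)
            if m is None:
                continue
            for mgr in MANAGERS:
                if mgr in ln:
                    seen[(m.group("uid"), m.group("name"))].add(mgr)
        return seen

    # Entries removed by the CPU managers but never by memory_manager:
    # [k for k, v in stale_state_coverage(lines).items() if "memory_manager" not in v]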
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.792553 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-lf8cn"] Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.914341 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.914380 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.914420 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhgbc\" (UniqueName: \"kubernetes.io/projected/53ec5a58-e98c-4b0b-a711-b52e332ba26c-kube-api-access-jhgbc\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.914442 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.914495 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-config\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:30 crc kubenswrapper[4779]: I1128 12:54:30.914514 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.015963 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.015999 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.016036 4779 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jhgbc\" (UniqueName: \"kubernetes.io/projected/53ec5a58-e98c-4b0b-a711-b52e332ba26c-kube-api-access-jhgbc\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.016055 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.016084 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-config\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.016105 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.017281 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.017358 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.017482 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.018381 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-config\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.019604 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.041072 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhgbc\" (UniqueName: 
\"kubernetes.io/projected/53ec5a58-e98c-4b0b-a711-b52e332ba26c-kube-api-access-jhgbc\") pod \"dnsmasq-dns-7ff5475cc9-lf8cn\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.110083 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.370396 4779 generic.go:334] "Generic (PLEG): container finished" podID="4ae2270c-607f-4315-959e-eb8536afafe9" containerID="26fededc6d08e301a2ca39e554e235beff22fb28cc2965eedeb2d6746ed78e18" exitCode=0 Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.370483 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vlmfj" event={"ID":"4ae2270c-607f-4315-959e-eb8536afafe9","Type":"ContainerDied","Data":"26fededc6d08e301a2ca39e554e235beff22fb28cc2965eedeb2d6746ed78e18"} Nov 28 12:54:31 crc kubenswrapper[4779]: I1128 12:54:31.543826 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-lf8cn"] Nov 28 12:54:31 crc kubenswrapper[4779]: W1128 12:54:31.551826 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53ec5a58_e98c_4b0b_a711_b52e332ba26c.slice/crio-eb77f2bcdc91ceddea3c973a7988aae63ee3875615d127fe490e1ad3de7ec61d WatchSource:0}: Error finding container eb77f2bcdc91ceddea3c973a7988aae63ee3875615d127fe490e1ad3de7ec61d: Status 404 returned error can't find the container with id eb77f2bcdc91ceddea3c973a7988aae63ee3875615d127fe490e1ad3de7ec61d Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.381862 4779 generic.go:334] "Generic (PLEG): container finished" podID="53ec5a58-e98c-4b0b-a711-b52e332ba26c" containerID="2d1d066797a11465febd91edf7346c8960a4e43fa95e3d0776b174be813b95c2" exitCode=0 Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.381945 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" event={"ID":"53ec5a58-e98c-4b0b-a711-b52e332ba26c","Type":"ContainerDied","Data":"2d1d066797a11465febd91edf7346c8960a4e43fa95e3d0776b174be813b95c2"} Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.382360 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" event={"ID":"53ec5a58-e98c-4b0b-a711-b52e332ba26c","Type":"ContainerStarted","Data":"eb77f2bcdc91ceddea3c973a7988aae63ee3875615d127fe490e1ad3de7ec61d"} Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.712355 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.847655 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nldsk\" (UniqueName: \"kubernetes.io/projected/4ae2270c-607f-4315-959e-eb8536afafe9-kube-api-access-nldsk\") pod \"4ae2270c-607f-4315-959e-eb8536afafe9\" (UID: \"4ae2270c-607f-4315-959e-eb8536afafe9\") " Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.847755 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae2270c-607f-4315-959e-eb8536afafe9-config-data\") pod \"4ae2270c-607f-4315-959e-eb8536afafe9\" (UID: \"4ae2270c-607f-4315-959e-eb8536afafe9\") " Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.847788 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae2270c-607f-4315-959e-eb8536afafe9-combined-ca-bundle\") pod \"4ae2270c-607f-4315-959e-eb8536afafe9\" (UID: \"4ae2270c-607f-4315-959e-eb8536afafe9\") " Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.856258 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae2270c-607f-4315-959e-eb8536afafe9-kube-api-access-nldsk" (OuterVolumeSpecName: "kube-api-access-nldsk") pod "4ae2270c-607f-4315-959e-eb8536afafe9" (UID: "4ae2270c-607f-4315-959e-eb8536afafe9"). InnerVolumeSpecName "kube-api-access-nldsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.886653 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae2270c-607f-4315-959e-eb8536afafe9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ae2270c-607f-4315-959e-eb8536afafe9" (UID: "4ae2270c-607f-4315-959e-eb8536afafe9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.913785 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae2270c-607f-4315-959e-eb8536afafe9-config-data" (OuterVolumeSpecName: "config-data") pod "4ae2270c-607f-4315-959e-eb8536afafe9" (UID: "4ae2270c-607f-4315-959e-eb8536afafe9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.950174 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nldsk\" (UniqueName: \"kubernetes.io/projected/4ae2270c-607f-4315-959e-eb8536afafe9-kube-api-access-nldsk\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.950220 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae2270c-607f-4315-959e-eb8536afafe9-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:32 crc kubenswrapper[4779]: I1128 12:54:32.950239 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae2270c-607f-4315-959e-eb8536afafe9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.403974 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" event={"ID":"53ec5a58-e98c-4b0b-a711-b52e332ba26c","Type":"ContainerStarted","Data":"8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9"} Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.404432 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.407390 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vlmfj" event={"ID":"4ae2270c-607f-4315-959e-eb8536afafe9","Type":"ContainerDied","Data":"dfe0fa4ff910f5c2776c005e95bd3bfce2e4cc8efe541af563cca16a61fc4f8c"} Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.407537 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfe0fa4ff910f5c2776c005e95bd3bfce2e4cc8efe541af563cca16a61fc4f8c" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.411218 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vlmfj" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.453869 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" podStartSLOduration=3.453845001 podStartE2EDuration="3.453845001s" podCreationTimestamp="2025-11-28 12:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:54:33.438560828 +0000 UTC m=+1134.004236222" watchObservedRunningTime="2025-11-28 12:54:33.453845001 +0000 UTC m=+1134.019520365" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.604484 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-n6d4q"] Nov 28 12:54:33 crc kubenswrapper[4779]: E1128 12:54:33.604790 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae2270c-607f-4315-959e-eb8536afafe9" containerName="keystone-db-sync" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.604806 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae2270c-607f-4315-959e-eb8536afafe9" containerName="keystone-db-sync" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.604976 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae2270c-607f-4315-959e-eb8536afafe9" containerName="keystone-db-sync" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.605521 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.612769 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.613028 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.613164 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.613350 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nlxvv" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.613585 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.619251 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-lf8cn"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.633754 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n6d4q"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.649916 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.651290 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.692164 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.747122 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-2rlgj"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.748099 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-2rlgj" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.751887 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-prts7" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.752099 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.756234 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-2rlgj"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.760876 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-fernet-keys\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.760964 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnrlp\" (UniqueName: \"kubernetes.io/projected/67e05920-c609-42bb-95ce-244a7564af1e-kube-api-access-rnrlp\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.761008 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.761034 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.761065 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-combined-ca-bundle\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.761127 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-config-data\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.761169 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-849rr\" (UniqueName: \"kubernetes.io/projected/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-kube-api-access-849rr\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.761199 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-scripts\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.761228 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-config\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.761249 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.761274 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.761309 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-credential-keys\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.840683 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.847724 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.855037 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.855232 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.858534 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-q7v56"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.859776 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-q7v56" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862559 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnrlp\" (UniqueName: \"kubernetes.io/projected/67e05920-c609-42bb-95ce-244a7564af1e-kube-api-access-rnrlp\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862614 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862635 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862654 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-combined-ca-bundle\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862674 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsmk9\" (UniqueName: \"kubernetes.io/projected/090eca16-3536-4b84-85c8-e9a0d3a7deb6-kube-api-access-tsmk9\") pod \"heat-db-sync-2rlgj\" (UID: \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\") " pod="openstack/heat-db-sync-2rlgj" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862706 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-config-data\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862732 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-849rr\" (UniqueName: \"kubernetes.io/projected/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-kube-api-access-849rr\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862752 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-scripts\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862772 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-config\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 
12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862789 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090eca16-3536-4b84-85c8-e9a0d3a7deb6-config-data\") pod \"heat-db-sync-2rlgj\" (UID: \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\") " pod="openstack/heat-db-sync-2rlgj" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862806 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862824 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862848 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-credential-keys\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862871 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-fernet-keys\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.862885 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090eca16-3536-4b84-85c8-e9a0d3a7deb6-combined-ca-bundle\") pod \"heat-db-sync-2rlgj\" (UID: \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\") " pod="openstack/heat-db-sync-2rlgj" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.864003 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.864531 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-config\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.864793 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.865012 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " 
pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.865030 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9ppff" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.865514 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.865828 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.866479 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.867009 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-q7v56"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.868854 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-credential-keys\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.880281 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.889527 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-scripts\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.889602 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-sh4hl"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.890597 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.893449 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-k7xzm" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.895091 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnrlp\" (UniqueName: \"kubernetes.io/projected/67e05920-c609-42bb-95ce-244a7564af1e-kube-api-access-rnrlp\") pod \"dnsmasq-dns-5c5cc7c5ff-cd7zw\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.896409 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.896573 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-combined-ca-bundle\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.896593 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.897696 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-config-data\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.897813 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-fernet-keys\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.915648 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-849rr\" (UniqueName: \"kubernetes.io/projected/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-kube-api-access-849rr\") pod \"keystone-bootstrap-n6d4q\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.929483 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.946160 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-sh4hl"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.964722 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-scripts\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.964760 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hmfg\" (UniqueName: \"kubernetes.io/projected/082059dc-73e6-482b-a0ad-ed2a62282f61-kube-api-access-6hmfg\") pod \"neutron-db-sync-q7v56\" (UID: \"082059dc-73e6-482b-a0ad-ed2a62282f61\") " pod="openstack/neutron-db-sync-q7v56" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.964805 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsmk9\" (UniqueName: \"kubernetes.io/projected/090eca16-3536-4b84-85c8-e9a0d3a7deb6-kube-api-access-tsmk9\") pod \"heat-db-sync-2rlgj\" (UID: \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\") " pod="openstack/heat-db-sync-2rlgj" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.964828 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-config-data\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.964847 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1c512ed-6e02-45b5-a320-0a1b58b074ab-log-httpd\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.964870 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntq2b\" (UniqueName: \"kubernetes.io/projected/c1c512ed-6e02-45b5-a320-0a1b58b074ab-kube-api-access-ntq2b\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.964902 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090eca16-3536-4b84-85c8-e9a0d3a7deb6-config-data\") pod \"heat-db-sync-2rlgj\" (UID: \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\") " pod="openstack/heat-db-sync-2rlgj" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.964921 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1c512ed-6e02-45b5-a320-0a1b58b074ab-run-httpd\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.964940 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.964956 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/082059dc-73e6-482b-a0ad-ed2a62282f61-config\") pod \"neutron-db-sync-q7v56\" (UID: \"082059dc-73e6-482b-a0ad-ed2a62282f61\") " pod="openstack/neutron-db-sync-q7v56" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.964973 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082059dc-73e6-482b-a0ad-ed2a62282f61-combined-ca-bundle\") pod \"neutron-db-sync-q7v56\" (UID: \"082059dc-73e6-482b-a0ad-ed2a62282f61\") " pod="openstack/neutron-db-sync-q7v56" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.965022 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090eca16-3536-4b84-85c8-e9a0d3a7deb6-combined-ca-bundle\") pod \"heat-db-sync-2rlgj\" (UID: \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\") " pod="openstack/heat-db-sync-2rlgj" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.965062 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.973774 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.979403 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090eca16-3536-4b84-85c8-e9a0d3a7deb6-config-data\") pod \"heat-db-sync-2rlgj\" (UID: \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\") " pod="openstack/heat-db-sync-2rlgj" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.983939 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090eca16-3536-4b84-85c8-e9a0d3a7deb6-combined-ca-bundle\") pod \"heat-db-sync-2rlgj\" (UID: \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\") " pod="openstack/heat-db-sync-2rlgj" Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.995975 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-ggv2n"] Nov 28 12:54:33 crc kubenswrapper[4779]: I1128 12:54:33.997023 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.003024 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.012029 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xmvlq" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.015768 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsmk9\" (UniqueName: \"kubernetes.io/projected/090eca16-3536-4b84-85c8-e9a0d3a7deb6-kube-api-access-tsmk9\") pod \"heat-db-sync-2rlgj\" (UID: \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\") " pod="openstack/heat-db-sync-2rlgj" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.019948 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ggv2n"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.030313 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-nrmk4"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.031381 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.033742 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hn9ct" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.034024 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.034353 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.037742 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nrmk4"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.070411 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-2rlgj" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.071680 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrsh\" (UniqueName: \"kubernetes.io/projected/aaa51f35-9ab4-4629-ae5a-349484d0917d-kube-api-access-nhrsh\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.071731 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.071754 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-scripts\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.071760 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.071771 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-combined-ca-bundle\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.071870 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hmfg\" (UniqueName: \"kubernetes.io/projected/082059dc-73e6-482b-a0ad-ed2a62282f61-kube-api-access-6hmfg\") pod \"neutron-db-sync-q7v56\" (UID: \"082059dc-73e6-482b-a0ad-ed2a62282f61\") " pod="openstack/neutron-db-sync-q7v56" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.071990 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-config-data\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.072014 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaa51f35-9ab4-4629-ae5a-349484d0917d-etc-machine-id\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.072046 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1c512ed-6e02-45b5-a320-0a1b58b074ab-log-httpd\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.072073 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-db-sync-config-data\") pod \"cinder-db-sync-sh4hl\" (UID: 
\"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.072124 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntq2b\" (UniqueName: \"kubernetes.io/projected/c1c512ed-6e02-45b5-a320-0a1b58b074ab-kube-api-access-ntq2b\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.072204 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-config-data\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.072218 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-scripts\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.072237 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1c512ed-6e02-45b5-a320-0a1b58b074ab-run-httpd\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.072276 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.072292 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/082059dc-73e6-482b-a0ad-ed2a62282f61-config\") pod \"neutron-db-sync-q7v56\" (UID: \"082059dc-73e6-482b-a0ad-ed2a62282f61\") " pod="openstack/neutron-db-sync-q7v56" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.072321 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082059dc-73e6-482b-a0ad-ed2a62282f61-combined-ca-bundle\") pod \"neutron-db-sync-q7v56\" (UID: \"082059dc-73e6-482b-a0ad-ed2a62282f61\") " pod="openstack/neutron-db-sync-q7v56" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.073217 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1c512ed-6e02-45b5-a320-0a1b58b074ab-run-httpd\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.073250 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1c512ed-6e02-45b5-a320-0a1b58b074ab-log-httpd\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.084385 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.087208 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082059dc-73e6-482b-a0ad-ed2a62282f61-combined-ca-bundle\") pod \"neutron-db-sync-q7v56\" (UID: \"082059dc-73e6-482b-a0ad-ed2a62282f61\") " pod="openstack/neutron-db-sync-q7v56" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.088645 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-5bfp4"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.090290 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.090425 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-scripts\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.092721 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.093991 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-config-data\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.096869 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/082059dc-73e6-482b-a0ad-ed2a62282f61-config\") pod \"neutron-db-sync-q7v56\" (UID: \"082059dc-73e6-482b-a0ad-ed2a62282f61\") " pod="openstack/neutron-db-sync-q7v56" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.099060 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hmfg\" (UniqueName: \"kubernetes.io/projected/082059dc-73e6-482b-a0ad-ed2a62282f61-kube-api-access-6hmfg\") pod \"neutron-db-sync-q7v56\" (UID: \"082059dc-73e6-482b-a0ad-ed2a62282f61\") " pod="openstack/neutron-db-sync-q7v56" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.103892 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-5bfp4"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.109195 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntq2b\" (UniqueName: \"kubernetes.io/projected/c1c512ed-6e02-45b5-a320-0a1b58b074ab-kube-api-access-ntq2b\") pod \"ceilometer-0\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.164354 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-q7v56" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.227452 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaa51f35-9ab4-4629-ae5a-349484d0917d-etc-machine-id\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.227509 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-config-data\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.227537 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdbvk\" (UniqueName: \"kubernetes.io/projected/be93bf1f-510b-4a38-8f85-b59c36b2feb1-kube-api-access-sdbvk\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.227559 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be93bf1f-510b-4a38-8f85-b59c36b2feb1-logs\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.227599 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-db-sync-config-data\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.227623 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.227855 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlrz9\" (UniqueName: \"kubernetes.io/projected/1f844f06-a227-4423-9d97-33f9c85c0df8-kube-api-access-nlrz9\") pod \"barbican-db-sync-ggv2n\" (UID: \"1f844f06-a227-4423-9d97-33f9c85c0df8\") " pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.234898 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaa51f35-9ab4-4629-ae5a-349484d0917d-etc-machine-id\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.235333 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f844f06-a227-4423-9d97-33f9c85c0df8-db-sync-config-data\") pod \"barbican-db-sync-ggv2n\" (UID: \"1f844f06-a227-4423-9d97-33f9c85c0df8\") " 
pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.235480 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.235540 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-config-data\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.235557 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-scripts\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.235573 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-config\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.236370 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w45fp\" (UniqueName: \"kubernetes.io/projected/a38e4faf-dc47-411c-94d0-7e143c2540d0-kube-api-access-w45fp\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.236475 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrsh\" (UniqueName: \"kubernetes.io/projected/aaa51f35-9ab4-4629-ae5a-349484d0917d-kube-api-access-nhrsh\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.236701 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-combined-ca-bundle\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.236732 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f844f06-a227-4423-9d97-33f9c85c0df8-combined-ca-bundle\") pod \"barbican-db-sync-ggv2n\" (UID: \"1f844f06-a227-4423-9d97-33f9c85c0df8\") " pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.236788 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-scripts\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 
12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.236814 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.238521 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-combined-ca-bundle\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.240427 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.245726 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-db-sync-config-data\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.245996 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-config-data\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.256280 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-scripts\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.260151 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-combined-ca-bundle\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.273802 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhrsh\" (UniqueName: \"kubernetes.io/projected/aaa51f35-9ab4-4629-ae5a-349484d0917d-kube-api-access-nhrsh\") pod \"cinder-db-sync-sh4hl\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.314956 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.334845 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.344615 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w45fp\" (UniqueName: \"kubernetes.io/projected/a38e4faf-dc47-411c-94d0-7e143c2540d0-kube-api-access-w45fp\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.345045 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-combined-ca-bundle\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.345080 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f844f06-a227-4423-9d97-33f9c85c0df8-combined-ca-bundle\") pod \"barbican-db-sync-ggv2n\" (UID: \"1f844f06-a227-4423-9d97-33f9c85c0df8\") " pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.345137 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-scripts\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.345170 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.345711 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.345792 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-config-data\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.345822 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdbvk\" (UniqueName: \"kubernetes.io/projected/be93bf1f-510b-4a38-8f85-b59c36b2feb1-kube-api-access-sdbvk\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.345844 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be93bf1f-510b-4a38-8f85-b59c36b2feb1-logs\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.345893 4779 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.345928 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlrz9\" (UniqueName: \"kubernetes.io/projected/1f844f06-a227-4423-9d97-33f9c85c0df8-kube-api-access-nlrz9\") pod \"barbican-db-sync-ggv2n\" (UID: \"1f844f06-a227-4423-9d97-33f9c85c0df8\") " pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.345978 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f844f06-a227-4423-9d97-33f9c85c0df8-db-sync-config-data\") pod \"barbican-db-sync-ggv2n\" (UID: \"1f844f06-a227-4423-9d97-33f9c85c0df8\") " pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.346007 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.346038 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-config\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.349750 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.350405 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.350510 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be93bf1f-510b-4a38-8f85-b59c36b2feb1-logs\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.351211 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.351387 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-config\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.351729 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.353295 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f844f06-a227-4423-9d97-33f9c85c0df8-combined-ca-bundle\") pod \"barbican-db-sync-ggv2n\" (UID: \"1f844f06-a227-4423-9d97-33f9c85c0df8\") " pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.355596 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f844f06-a227-4423-9d97-33f9c85c0df8-db-sync-config-data\") pod \"barbican-db-sync-ggv2n\" (UID: \"1f844f06-a227-4423-9d97-33f9c85c0df8\") " pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.355687 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-config-data\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.356074 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-scripts\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.359459 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-combined-ca-bundle\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.372369 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w45fp\" (UniqueName: \"kubernetes.io/projected/a38e4faf-dc47-411c-94d0-7e143c2540d0-kube-api-access-w45fp\") pod \"dnsmasq-dns-8b5c85b87-5bfp4\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.377733 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdbvk\" (UniqueName: \"kubernetes.io/projected/be93bf1f-510b-4a38-8f85-b59c36b2feb1-kube-api-access-sdbvk\") pod \"placement-db-sync-nrmk4\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.378964 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlrz9\" (UniqueName: \"kubernetes.io/projected/1f844f06-a227-4423-9d97-33f9c85c0df8-kube-api-access-nlrz9\") pod \"barbican-db-sync-ggv2n\" (UID: \"1f844f06-a227-4423-9d97-33f9c85c0df8\") " 
pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.624216 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n6d4q"] Nov 28 12:54:34 crc kubenswrapper[4779]: W1128 12:54:34.624475 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6e28a04_d7f0_46a8_81db_5b23fc3ac835.slice/crio-0a61cd6a006354998f198bb997566dbecd41083dc2caa074df4fe61716e31e9b WatchSource:0}: Error finding container 0a61cd6a006354998f198bb997566dbecd41083dc2caa074df4fe61716e31e9b: Status 404 returned error can't find the container with id 0a61cd6a006354998f198bb997566dbecd41083dc2caa074df4fe61716e31e9b Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.631379 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.644204 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nrmk4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.652514 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.734294 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-2rlgj"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.749204 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.750689 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.752947 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.754614 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.755028 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q42sg" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.756612 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.756915 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.840870 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.842194 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.844550 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.844624 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.855665 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8c6c7c-6c23-4457-8f25-65d7def68794-logs\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.855738 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.855793 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.855821 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.855847 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f8c6c7c-6c23-4457-8f25-65d7def68794-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.855882 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlh69\" (UniqueName: \"kubernetes.io/projected/6f8c6c7c-6c23-4457-8f25-65d7def68794-kube-api-access-dlh69\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.855914 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.855947 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-internal-tls-certs\") pod \"glance-default-internal-api-0\" 
(UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.868049 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.877863 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.945130 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-sh4hl"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.962061 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.963230 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.963290 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp8qw\" (UniqueName: \"kubernetes.io/projected/c6a188e5-6e45-404a-b021-91592dce265d-kube-api-access-wp8qw\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.963315 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.963378 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a188e5-6e45-404a-b021-91592dce265d-logs\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.963394 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6a188e5-6e45-404a-b021-91592dce265d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.963409 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.963456 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.963480 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8c6c7c-6c23-4457-8f25-65d7def68794-logs\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.962542 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.969386 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.969788 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8c6c7c-6c23-4457-8f25-65d7def68794-logs\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.982890 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-q7v56"] Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.983036 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.983103 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.983194 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.983242 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.983287 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.983311 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f8c6c7c-6c23-4457-8f25-65d7def68794-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.983366 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlh69\" (UniqueName: \"kubernetes.io/projected/6f8c6c7c-6c23-4457-8f25-65d7def68794-kube-api-access-dlh69\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.991183 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f8c6c7c-6c23-4457-8f25-65d7def68794-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.991878 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.992994 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:34 crc kubenswrapper[4779]: I1128 12:54:34.997793 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.000333 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.001012 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlh69\" (UniqueName: \"kubernetes.io/projected/6f8c6c7c-6c23-4457-8f25-65d7def68794-kube-api-access-dlh69\") pod \"glance-default-internal-api-0\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.092132 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.092210 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp8qw\" (UniqueName: \"kubernetes.io/projected/c6a188e5-6e45-404a-b021-91592dce265d-kube-api-access-wp8qw\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.092267 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a188e5-6e45-404a-b021-91592dce265d-logs\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.092285 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6a188e5-6e45-404a-b021-91592dce265d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.092307 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.092356 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.092442 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.092619 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.093334 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a188e5-6e45-404a-b021-91592dce265d-logs\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.094257 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: 
\"c6a188e5-6e45-404a-b021-91592dce265d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.096494 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6a188e5-6e45-404a-b021-91592dce265d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.106574 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.106769 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.107139 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.113046 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.113191 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp8qw\" (UniqueName: \"kubernetes.io/projected/c6a188e5-6e45-404a-b021-91592dce265d-kube-api-access-wp8qw\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.126175 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") " pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.150994 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.168839 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.179691 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.285199 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ggv2n"] Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.291712 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-5bfp4"] Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.414368 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nrmk4"] Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.458769 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sh4hl" event={"ID":"aaa51f35-9ab4-4629-ae5a-349484d0917d","Type":"ContainerStarted","Data":"53d59a2dac967357d2cf1cf7d81ce49e92e6ada155f2f98656c7e16c73de1bf8"} Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.459829 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q7v56" event={"ID":"082059dc-73e6-482b-a0ad-ed2a62282f61","Type":"ContainerStarted","Data":"8c401f9e3bbc7c8fc2ef8e0bcb612ea0e141b7fe4df0990fe5cbdcb27b0b2f9d"} Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.461173 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n6d4q" event={"ID":"e6e28a04-d7f0-46a8-81db-5b23fc3ac835","Type":"ContainerStarted","Data":"152891d1ce9830ccd975c1998b9ce771e2bb41f01cfb71a544da9319642e6cae"} Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.461216 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n6d4q" event={"ID":"e6e28a04-d7f0-46a8-81db-5b23fc3ac835","Type":"ContainerStarted","Data":"0a61cd6a006354998f198bb997566dbecd41083dc2caa074df4fe61716e31e9b"} Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.462024 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ggv2n" event={"ID":"1f844f06-a227-4423-9d97-33f9c85c0df8","Type":"ContainerStarted","Data":"eec1fe99366f987ed7967e3b7d5fdb7b10bd97bc95b297d7de50d6360eb472f4"} Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.462864 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1c512ed-6e02-45b5-a320-0a1b58b074ab","Type":"ContainerStarted","Data":"3a495fb8bb8a40f48ce1ef32bac92fefc808d97694f4c7a5a08476bf6e07f093"} Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.463885 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" event={"ID":"67e05920-c609-42bb-95ce-244a7564af1e","Type":"ContainerStarted","Data":"4d90d1cb2875e6db916b65b59d4d3447f5ba0a427540181827053cc16848f990"} Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.465297 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" event={"ID":"a38e4faf-dc47-411c-94d0-7e143c2540d0","Type":"ContainerStarted","Data":"94dce1ee06460af8d07ee2a95925f556b7c203ef7a3a086b1b5a178b5da10c3e"} Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.466422 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" podUID="53ec5a58-e98c-4b0b-a711-b52e332ba26c" containerName="dnsmasq-dns" containerID="cri-o://8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9" gracePeriod=10 Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.466650 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2rlgj" 
event={"ID":"090eca16-3536-4b84-85c8-e9a0d3a7deb6","Type":"ContainerStarted","Data":"ded2120ced19628bc0cd364b28c90e052b35e2ef293f36e77172891fb7a940fe"} Nov 28 12:54:35 crc kubenswrapper[4779]: W1128 12:54:35.472990 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe93bf1f_510b_4a38_8f85_b59c36b2feb1.slice/crio-9a376e3d5d23f3a824f1298a4303597ae79cb88f1c0edcaf52e53ab8bd072e34 WatchSource:0}: Error finding container 9a376e3d5d23f3a824f1298a4303597ae79cb88f1c0edcaf52e53ab8bd072e34: Status 404 returned error can't find the container with id 9a376e3d5d23f3a824f1298a4303597ae79cb88f1c0edcaf52e53ab8bd072e34 Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.485746 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-n6d4q" podStartSLOduration=2.485727847 podStartE2EDuration="2.485727847s" podCreationTimestamp="2025-11-28 12:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:54:35.47674961 +0000 UTC m=+1136.042424984" watchObservedRunningTime="2025-11-28 12:54:35.485727847 +0000 UTC m=+1136.051403221" Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.593250 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:54:35 crc kubenswrapper[4779]: I1128 12:54:35.974452 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 12:54:35 crc kubenswrapper[4779]: W1128 12:54:35.996400 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6a188e5_6e45_404a_b021_91592dce265d.slice/crio-aba7e6b5046ba9b690e1f4a7483bbac9a6a00b102ebec47a656ba1c82ee69e6d WatchSource:0}: Error finding container aba7e6b5046ba9b690e1f4a7483bbac9a6a00b102ebec47a656ba1c82ee69e6d: Status 404 returned error can't find the container with id aba7e6b5046ba9b690e1f4a7483bbac9a6a00b102ebec47a656ba1c82ee69e6d Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.471871 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.561360 4779 generic.go:334] "Generic (PLEG): container finished" podID="a38e4faf-dc47-411c-94d0-7e143c2540d0" containerID="821e42c02099ce1aaf571a616fb0cee538a34cbf3677ff093436d4d87aa399a8" exitCode=0 Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.561444 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" event={"ID":"a38e4faf-dc47-411c-94d0-7e143c2540d0","Type":"ContainerDied","Data":"821e42c02099ce1aaf571a616fb0cee538a34cbf3677ff093436d4d87aa399a8"} Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.590954 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.599704 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.606897 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.612640 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q7v56" event={"ID":"082059dc-73e6-482b-a0ad-ed2a62282f61","Type":"ContainerStarted","Data":"121b08d2adad132bfcefaa378c89b7d30716fbd3708ae63ea920eecc8466b749"} Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.617995 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nrmk4" event={"ID":"be93bf1f-510b-4a38-8f85-b59c36b2feb1","Type":"ContainerStarted","Data":"9a376e3d5d23f3a824f1298a4303597ae79cb88f1c0edcaf52e53ab8bd072e34"} Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.629658 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6f8c6c7c-6c23-4457-8f25-65d7def68794","Type":"ContainerStarted","Data":"fd3dbe6043c1304175ff2bf043805f614fc7706fa15b25309e02fd5c50de660f"} Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.642233 4779 generic.go:334] "Generic (PLEG): container finished" podID="53ec5a58-e98c-4b0b-a711-b52e332ba26c" containerID="8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9" exitCode=0 Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.642293 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" event={"ID":"53ec5a58-e98c-4b0b-a711-b52e332ba26c","Type":"ContainerDied","Data":"8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9"} Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.642318 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" event={"ID":"53ec5a58-e98c-4b0b-a711-b52e332ba26c","Type":"ContainerDied","Data":"eb77f2bcdc91ceddea3c973a7988aae63ee3875615d127fe490e1ad3de7ec61d"} Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.642334 4779 scope.go:117] "RemoveContainer" containerID="8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.642442 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-lf8cn" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.648454 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6a188e5-6e45-404a-b021-91592dce265d","Type":"ContainerStarted","Data":"aba7e6b5046ba9b690e1f4a7483bbac9a6a00b102ebec47a656ba1c82ee69e6d"} Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.659162 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-q7v56" podStartSLOduration=3.659144885 podStartE2EDuration="3.659144885s" podCreationTimestamp="2025-11-28 12:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:54:36.65366922 +0000 UTC m=+1137.219344574" watchObservedRunningTime="2025-11-28 12:54:36.659144885 +0000 UTC m=+1137.224820239" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.681704 4779 generic.go:334] "Generic (PLEG): container finished" podID="67e05920-c609-42bb-95ce-244a7564af1e" containerID="18338c0ff4d0c30f8574a84330c48ff87aa81cfc4743a67c1bc11ce557d8b649" exitCode=0 Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.681846 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" event={"ID":"67e05920-c609-42bb-95ce-244a7564af1e","Type":"ContainerDied","Data":"18338c0ff4d0c30f8574a84330c48ff87aa81cfc4743a67c1bc11ce557d8b649"} Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.743074 4779 scope.go:117] "RemoveContainer" containerID="2d1d066797a11465febd91edf7346c8960a4e43fa95e3d0776b174be813b95c2" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.750554 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-config\") pod \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.755202 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-ovsdbserver-sb\") pod \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.755299 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhgbc\" (UniqueName: \"kubernetes.io/projected/53ec5a58-e98c-4b0b-a711-b52e332ba26c-kube-api-access-jhgbc\") pod \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.755425 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-ovsdbserver-nb\") pod \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.755467 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-dns-svc\") pod \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.755574 4779 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-dns-swift-storage-0\") pod \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\" (UID: \"53ec5a58-e98c-4b0b-a711-b52e332ba26c\") " Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.769212 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ec5a58-e98c-4b0b-a711-b52e332ba26c-kube-api-access-jhgbc" (OuterVolumeSpecName: "kube-api-access-jhgbc") pod "53ec5a58-e98c-4b0b-a711-b52e332ba26c" (UID: "53ec5a58-e98c-4b0b-a711-b52e332ba26c"). InnerVolumeSpecName "kube-api-access-jhgbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.782888 4779 scope.go:117] "RemoveContainer" containerID="8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9" Nov 28 12:54:36 crc kubenswrapper[4779]: E1128 12:54:36.793307 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9\": container with ID starting with 8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9 not found: ID does not exist" containerID="8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.793484 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9"} err="failed to get container status \"8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9\": rpc error: code = NotFound desc = could not find container \"8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9\": container with ID starting with 8c6ec04444ea67de9a7c5ddfac993b816c6b31fb52b841e88020ce18857191a9 not found: ID does not exist" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.793530 4779 scope.go:117] "RemoveContainer" containerID="2d1d066797a11465febd91edf7346c8960a4e43fa95e3d0776b174be813b95c2" Nov 28 12:54:36 crc kubenswrapper[4779]: E1128 12:54:36.795251 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d1d066797a11465febd91edf7346c8960a4e43fa95e3d0776b174be813b95c2\": container with ID starting with 2d1d066797a11465febd91edf7346c8960a4e43fa95e3d0776b174be813b95c2 not found: ID does not exist" containerID="2d1d066797a11465febd91edf7346c8960a4e43fa95e3d0776b174be813b95c2" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.795283 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d1d066797a11465febd91edf7346c8960a4e43fa95e3d0776b174be813b95c2"} err="failed to get container status \"2d1d066797a11465febd91edf7346c8960a4e43fa95e3d0776b174be813b95c2\": rpc error: code = NotFound desc = could not find container \"2d1d066797a11465febd91edf7346c8960a4e43fa95e3d0776b174be813b95c2\": container with ID starting with 2d1d066797a11465febd91edf7346c8960a4e43fa95e3d0776b174be813b95c2 not found: ID does not exist" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.845647 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "53ec5a58-e98c-4b0b-a711-b52e332ba26c" (UID: 
"53ec5a58-e98c-4b0b-a711-b52e332ba26c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.853814 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "53ec5a58-e98c-4b0b-a711-b52e332ba26c" (UID: "53ec5a58-e98c-4b0b-a711-b52e332ba26c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.854166 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "53ec5a58-e98c-4b0b-a711-b52e332ba26c" (UID: "53ec5a58-e98c-4b0b-a711-b52e332ba26c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.857633 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.857657 4779 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.857667 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.857676 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhgbc\" (UniqueName: \"kubernetes.io/projected/53ec5a58-e98c-4b0b-a711-b52e332ba26c-kube-api-access-jhgbc\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.859471 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-config" (OuterVolumeSpecName: "config") pod "53ec5a58-e98c-4b0b-a711-b52e332ba26c" (UID: "53ec5a58-e98c-4b0b-a711-b52e332ba26c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.863415 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "53ec5a58-e98c-4b0b-a711-b52e332ba26c" (UID: "53ec5a58-e98c-4b0b-a711-b52e332ba26c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.959056 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.959076 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53ec5a58-e98c-4b0b-a711-b52e332ba26c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:36 crc kubenswrapper[4779]: I1128 12:54:36.999182 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-lf8cn"] Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.018902 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-lf8cn"] Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.194657 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.267536 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-ovsdbserver-nb\") pod \"67e05920-c609-42bb-95ce-244a7564af1e\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.267609 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-dns-svc\") pod \"67e05920-c609-42bb-95ce-244a7564af1e\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.267655 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-dns-swift-storage-0\") pod \"67e05920-c609-42bb-95ce-244a7564af1e\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.267712 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnrlp\" (UniqueName: \"kubernetes.io/projected/67e05920-c609-42bb-95ce-244a7564af1e-kube-api-access-rnrlp\") pod \"67e05920-c609-42bb-95ce-244a7564af1e\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.267787 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-ovsdbserver-sb\") pod \"67e05920-c609-42bb-95ce-244a7564af1e\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.267837 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-config\") pod \"67e05920-c609-42bb-95ce-244a7564af1e\" (UID: \"67e05920-c609-42bb-95ce-244a7564af1e\") " Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.282664 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e05920-c609-42bb-95ce-244a7564af1e-kube-api-access-rnrlp" (OuterVolumeSpecName: "kube-api-access-rnrlp") pod "67e05920-c609-42bb-95ce-244a7564af1e" (UID: 
"67e05920-c609-42bb-95ce-244a7564af1e"). InnerVolumeSpecName "kube-api-access-rnrlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.305601 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-config" (OuterVolumeSpecName: "config") pod "67e05920-c609-42bb-95ce-244a7564af1e" (UID: "67e05920-c609-42bb-95ce-244a7564af1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.308540 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "67e05920-c609-42bb-95ce-244a7564af1e" (UID: "67e05920-c609-42bb-95ce-244a7564af1e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.318172 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "67e05920-c609-42bb-95ce-244a7564af1e" (UID: "67e05920-c609-42bb-95ce-244a7564af1e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.324152 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "67e05920-c609-42bb-95ce-244a7564af1e" (UID: "67e05920-c609-42bb-95ce-244a7564af1e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.336716 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "67e05920-c609-42bb-95ce-244a7564af1e" (UID: "67e05920-c609-42bb-95ce-244a7564af1e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.375612 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.375666 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.375698 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.375707 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.375715 4779 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67e05920-c609-42bb-95ce-244a7564af1e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.375744 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnrlp\" (UniqueName: \"kubernetes.io/projected/67e05920-c609-42bb-95ce-244a7564af1e-kube-api-access-rnrlp\") on node \"crc\" DevicePath \"\"" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.703585 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.706539 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw" event={"ID":"67e05920-c609-42bb-95ce-244a7564af1e","Type":"ContainerDied","Data":"4d90d1cb2875e6db916b65b59d4d3447f5ba0a427540181827053cc16848f990"} Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.706591 4779 scope.go:117] "RemoveContainer" containerID="18338c0ff4d0c30f8574a84330c48ff87aa81cfc4743a67c1bc11ce557d8b649" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.709423 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6f8c6c7c-6c23-4457-8f25-65d7def68794","Type":"ContainerStarted","Data":"cc88550e5a67e418e7de34985a6c442e7dc3248050bdaef472f7e10398197d6f"} Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.782343 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ec5a58-e98c-4b0b-a711-b52e332ba26c" path="/var/lib/kubelet/pods/53ec5a58-e98c-4b0b-a711-b52e332ba26c/volumes" Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.802998 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw"] Nov 28 12:54:37 crc kubenswrapper[4779]: I1128 12:54:37.810167 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-cd7zw"] Nov 28 12:54:39 crc kubenswrapper[4779]: I1128 12:54:39.747627 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e05920-c609-42bb-95ce-244a7564af1e" path="/var/lib/kubelet/pods/67e05920-c609-42bb-95ce-244a7564af1e/volumes" Nov 28 12:54:42 crc kubenswrapper[4779]: I1128 12:54:42.179677 4779 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-mqs42" podUID="70ee469b-f21f-4b94-9f6a-1b79db90e4fd" containerName="nmstate-handler" probeResult="failure" output="command timed out" Nov 28 12:54:43 crc kubenswrapper[4779]: I1128 12:54:43.797146 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" event={"ID":"a38e4faf-dc47-411c-94d0-7e143c2540d0","Type":"ContainerStarted","Data":"2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07"} Nov 28 12:54:43 crc kubenswrapper[4779]: I1128 12:54:43.797529 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:43 crc kubenswrapper[4779]: I1128 12:54:43.804252 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6f8c6c7c-6c23-4457-8f25-65d7def68794","Type":"ContainerStarted","Data":"e2559aee7f593592775ce559417caadebd36aa18a22fe7f843a1a46efc7584e0"} Nov 28 12:54:43 crc kubenswrapper[4779]: I1128 12:54:43.807116 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6a188e5-6e45-404a-b021-91592dce265d","Type":"ContainerStarted","Data":"9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d"} Nov 28 12:54:43 crc kubenswrapper[4779]: I1128 12:54:43.826274 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" podStartSLOduration=10.826250688 podStartE2EDuration="10.826250688s" podCreationTimestamp="2025-11-28 12:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:54:43.823460894 +0000 UTC m=+1144.389136268" watchObservedRunningTime="2025-11-28 12:54:43.826250688 +0000 UTC m=+1144.391926082" Nov 28 12:54:44 crc kubenswrapper[4779]: I1128 12:54:44.814112 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6f8c6c7c-6c23-4457-8f25-65d7def68794" containerName="glance-log" containerID="cri-o://cc88550e5a67e418e7de34985a6c442e7dc3248050bdaef472f7e10398197d6f" gracePeriod=30 Nov 28 12:54:44 crc kubenswrapper[4779]: I1128 12:54:44.814213 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6f8c6c7c-6c23-4457-8f25-65d7def68794" containerName="glance-httpd" containerID="cri-o://e2559aee7f593592775ce559417caadebd36aa18a22fe7f843a1a46efc7584e0" gracePeriod=30 Nov 28 12:54:44 crc kubenswrapper[4779]: I1128 12:54:44.840989 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.840967298 podStartE2EDuration="11.840967298s" podCreationTimestamp="2025-11-28 12:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:54:44.835610516 +0000 UTC m=+1145.401285870" watchObservedRunningTime="2025-11-28 12:54:44.840967298 +0000 UTC m=+1145.406642652" Nov 28 12:54:45 crc kubenswrapper[4779]: I1128 12:54:45.827381 4779 generic.go:334] "Generic (PLEG): container finished" podID="6f8c6c7c-6c23-4457-8f25-65d7def68794" containerID="e2559aee7f593592775ce559417caadebd36aa18a22fe7f843a1a46efc7584e0" exitCode=0 Nov 28 12:54:45 crc kubenswrapper[4779]: I1128 12:54:45.827415 4779 generic.go:334] 
"Generic (PLEG): container finished" podID="6f8c6c7c-6c23-4457-8f25-65d7def68794" containerID="cc88550e5a67e418e7de34985a6c442e7dc3248050bdaef472f7e10398197d6f" exitCode=143 Nov 28 12:54:45 crc kubenswrapper[4779]: I1128 12:54:45.827490 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6f8c6c7c-6c23-4457-8f25-65d7def68794","Type":"ContainerDied","Data":"e2559aee7f593592775ce559417caadebd36aa18a22fe7f843a1a46efc7584e0"} Nov 28 12:54:45 crc kubenswrapper[4779]: I1128 12:54:45.827560 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6f8c6c7c-6c23-4457-8f25-65d7def68794","Type":"ContainerDied","Data":"cc88550e5a67e418e7de34985a6c442e7dc3248050bdaef472f7e10398197d6f"} Nov 28 12:54:45 crc kubenswrapper[4779]: I1128 12:54:45.829306 4779 generic.go:334] "Generic (PLEG): container finished" podID="e6e28a04-d7f0-46a8-81db-5b23fc3ac835" containerID="152891d1ce9830ccd975c1998b9ce771e2bb41f01cfb71a544da9319642e6cae" exitCode=0 Nov 28 12:54:45 crc kubenswrapper[4779]: I1128 12:54:45.829335 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n6d4q" event={"ID":"e6e28a04-d7f0-46a8-81db-5b23fc3ac835","Type":"ContainerDied","Data":"152891d1ce9830ccd975c1998b9ce771e2bb41f01cfb71a544da9319642e6cae"} Nov 28 12:54:46 crc kubenswrapper[4779]: I1128 12:54:46.285192 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:54:46 crc kubenswrapper[4779]: I1128 12:54:46.285468 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:54:49 crc kubenswrapper[4779]: I1128 12:54:49.655303 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:54:49 crc kubenswrapper[4779]: I1128 12:54:49.716900 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-vjtpr"] Nov 28 12:54:49 crc kubenswrapper[4779]: I1128 12:54:49.717226 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" podUID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" containerName="dnsmasq-dns" containerID="cri-o://279416cdd5a7057b391b215ca9fea4e4dc0ce87b39f1f4adb3528ccd3038908c" gracePeriod=10 Nov 28 12:54:50 crc kubenswrapper[4779]: I1128 12:54:50.646792 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" podUID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Nov 28 12:54:50 crc kubenswrapper[4779]: I1128 12:54:50.880573 4779 generic.go:334] "Generic (PLEG): container finished" podID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" containerID="279416cdd5a7057b391b215ca9fea4e4dc0ce87b39f1f4adb3528ccd3038908c" exitCode=0 Nov 28 12:54:50 crc kubenswrapper[4779]: I1128 12:54:50.880620 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" event={"ID":"8367a732-6c2b-4fbd-8325-0e3c6eabc40e","Type":"ContainerDied","Data":"279416cdd5a7057b391b215ca9fea4e4dc0ce87b39f1f4adb3528ccd3038908c"} Nov 28 12:54:55 crc kubenswrapper[4779]: I1128 12:54:55.642866 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" podUID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Nov 28 12:54:59 crc kubenswrapper[4779]: E1128 12:54:59.927470 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 28 12:54:59 crc kubenswrapper[4779]: E1128 12:54:59.927971 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdbvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-nrmk4_openstack(be93bf1f-510b-4a38-8f85-b59c36b2feb1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 12:54:59 crc kubenswrapper[4779]: E1128 12:54:59.929728 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-nrmk4" 
podUID="be93bf1f-510b-4a38-8f85-b59c36b2feb1" Nov 28 12:54:59 crc kubenswrapper[4779]: E1128 12:54:59.971831 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-nrmk4" podUID="be93bf1f-510b-4a38-8f85-b59c36b2feb1" Nov 28 12:55:00 crc kubenswrapper[4779]: I1128 12:55:00.643466 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" podUID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Nov 28 12:55:00 crc kubenswrapper[4779]: I1128 12:55:00.643894 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.513214 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.521222 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.589882 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-internal-tls-certs\") pod \"6f8c6c7c-6c23-4457-8f25-65d7def68794\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.589935 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-849rr\" (UniqueName: \"kubernetes.io/projected/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-kube-api-access-849rr\") pod \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.589962 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8c6c7c-6c23-4457-8f25-65d7def68794-logs\") pod \"6f8c6c7c-6c23-4457-8f25-65d7def68794\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.590040 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-scripts\") pod \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.590618 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f8c6c7c-6c23-4457-8f25-65d7def68794-logs" (OuterVolumeSpecName: "logs") pod "6f8c6c7c-6c23-4457-8f25-65d7def68794" (UID: "6f8c6c7c-6c23-4457-8f25-65d7def68794"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.590713 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-scripts\") pod \"6f8c6c7c-6c23-4457-8f25-65d7def68794\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.590745 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"6f8c6c7c-6c23-4457-8f25-65d7def68794\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.590804 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-config-data\") pod \"6f8c6c7c-6c23-4457-8f25-65d7def68794\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.590832 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-credential-keys\") pod \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.590885 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f8c6c7c-6c23-4457-8f25-65d7def68794-httpd-run\") pod \"6f8c6c7c-6c23-4457-8f25-65d7def68794\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.591143 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-combined-ca-bundle\") pod \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.591195 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-fernet-keys\") pod \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.591223 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-config-data\") pod \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\" (UID: \"e6e28a04-d7f0-46a8-81db-5b23fc3ac835\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.591249 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlh69\" (UniqueName: \"kubernetes.io/projected/6f8c6c7c-6c23-4457-8f25-65d7def68794-kube-api-access-dlh69\") pod \"6f8c6c7c-6c23-4457-8f25-65d7def68794\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.591326 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-combined-ca-bundle\") pod \"6f8c6c7c-6c23-4457-8f25-65d7def68794\" (UID: \"6f8c6c7c-6c23-4457-8f25-65d7def68794\") " Nov 28 12:55:03 crc 
kubenswrapper[4779]: I1128 12:55:03.591719 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8c6c7c-6c23-4457-8f25-65d7def68794-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.592070 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f8c6c7c-6c23-4457-8f25-65d7def68794-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6f8c6c7c-6c23-4457-8f25-65d7def68794" (UID: "6f8c6c7c-6c23-4457-8f25-65d7def68794"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.597289 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8c6c7c-6c23-4457-8f25-65d7def68794-kube-api-access-dlh69" (OuterVolumeSpecName: "kube-api-access-dlh69") pod "6f8c6c7c-6c23-4457-8f25-65d7def68794" (UID: "6f8c6c7c-6c23-4457-8f25-65d7def68794"). InnerVolumeSpecName "kube-api-access-dlh69". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.605771 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e6e28a04-d7f0-46a8-81db-5b23fc3ac835" (UID: "e6e28a04-d7f0-46a8-81db-5b23fc3ac835"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.606487 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-kube-api-access-849rr" (OuterVolumeSpecName: "kube-api-access-849rr") pod "e6e28a04-d7f0-46a8-81db-5b23fc3ac835" (UID: "e6e28a04-d7f0-46a8-81db-5b23fc3ac835"). InnerVolumeSpecName "kube-api-access-849rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.607219 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-scripts" (OuterVolumeSpecName: "scripts") pod "e6e28a04-d7f0-46a8-81db-5b23fc3ac835" (UID: "e6e28a04-d7f0-46a8-81db-5b23fc3ac835"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.610305 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-scripts" (OuterVolumeSpecName: "scripts") pod "6f8c6c7c-6c23-4457-8f25-65d7def68794" (UID: "6f8c6c7c-6c23-4457-8f25-65d7def68794"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.614273 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "6f8c6c7c-6c23-4457-8f25-65d7def68794" (UID: "6f8c6c7c-6c23-4457-8f25-65d7def68794"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.632065 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e6e28a04-d7f0-46a8-81db-5b23fc3ac835" (UID: "e6e28a04-d7f0-46a8-81db-5b23fc3ac835"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.636860 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6e28a04-d7f0-46a8-81db-5b23fc3ac835" (UID: "e6e28a04-d7f0-46a8-81db-5b23fc3ac835"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.640422 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-config-data" (OuterVolumeSpecName: "config-data") pod "e6e28a04-d7f0-46a8-81db-5b23fc3ac835" (UID: "e6e28a04-d7f0-46a8-81db-5b23fc3ac835"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.644418 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f8c6c7c-6c23-4457-8f25-65d7def68794" (UID: "6f8c6c7c-6c23-4457-8f25-65d7def68794"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.655499 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-config-data" (OuterVolumeSpecName: "config-data") pod "6f8c6c7c-6c23-4457-8f25-65d7def68794" (UID: "6f8c6c7c-6c23-4457-8f25-65d7def68794"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.665536 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6f8c6c7c-6c23-4457-8f25-65d7def68794" (UID: "6f8c6c7c-6c23-4457-8f25-65d7def68794"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694001 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694035 4779 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694045 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-849rr\" (UniqueName: \"kubernetes.io/projected/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-kube-api-access-849rr\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694058 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694067 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694103 4779 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694112 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8c6c7c-6c23-4457-8f25-65d7def68794-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694120 4779 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694129 4779 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6f8c6c7c-6c23-4457-8f25-65d7def68794-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694137 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694146 4779 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694155 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e28a04-d7f0-46a8-81db-5b23fc3ac835-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.694165 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlh69\" (UniqueName: \"kubernetes.io/projected/6f8c6c7c-6c23-4457-8f25-65d7def68794-kube-api-access-dlh69\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 
12:55:03.716399 4779 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 28 12:55:03 crc kubenswrapper[4779]: I1128 12:55:03.796931 4779 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.006416 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6f8c6c7c-6c23-4457-8f25-65d7def68794","Type":"ContainerDied","Data":"fd3dbe6043c1304175ff2bf043805f614fc7706fa15b25309e02fd5c50de660f"} Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.006479 4779 scope.go:117] "RemoveContainer" containerID="e2559aee7f593592775ce559417caadebd36aa18a22fe7f843a1a46efc7584e0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.006607 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.010328 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n6d4q" event={"ID":"e6e28a04-d7f0-46a8-81db-5b23fc3ac835","Type":"ContainerDied","Data":"0a61cd6a006354998f198bb997566dbecd41083dc2caa074df4fe61716e31e9b"} Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.010407 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n6d4q" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.010913 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a61cd6a006354998f198bb997566dbecd41083dc2caa074df4fe61716e31e9b" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.055917 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.071391 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.071442 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:55:04 crc kubenswrapper[4779]: E1128 12:55:04.071717 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ec5a58-e98c-4b0b-a711-b52e332ba26c" containerName="init" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.071728 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ec5a58-e98c-4b0b-a711-b52e332ba26c" containerName="init" Nov 28 12:55:04 crc kubenswrapper[4779]: E1128 12:55:04.071739 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8c6c7c-6c23-4457-8f25-65d7def68794" containerName="glance-log" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.071745 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8c6c7c-6c23-4457-8f25-65d7def68794" containerName="glance-log" Nov 28 12:55:04 crc kubenswrapper[4779]: E1128 12:55:04.071760 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e05920-c609-42bb-95ce-244a7564af1e" containerName="init" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.071765 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e05920-c609-42bb-95ce-244a7564af1e" containerName="init" Nov 28 12:55:04 crc kubenswrapper[4779]: E1128 12:55:04.071781 4779 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="53ec5a58-e98c-4b0b-a711-b52e332ba26c" containerName="dnsmasq-dns" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.071787 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ec5a58-e98c-4b0b-a711-b52e332ba26c" containerName="dnsmasq-dns" Nov 28 12:55:04 crc kubenswrapper[4779]: E1128 12:55:04.071795 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8c6c7c-6c23-4457-8f25-65d7def68794" containerName="glance-httpd" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.071802 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8c6c7c-6c23-4457-8f25-65d7def68794" containerName="glance-httpd" Nov 28 12:55:04 crc kubenswrapper[4779]: E1128 12:55:04.071811 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e28a04-d7f0-46a8-81db-5b23fc3ac835" containerName="keystone-bootstrap" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.071816 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e28a04-d7f0-46a8-81db-5b23fc3ac835" containerName="keystone-bootstrap" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.071960 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e28a04-d7f0-46a8-81db-5b23fc3ac835" containerName="keystone-bootstrap" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.071973 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e05920-c609-42bb-95ce-244a7564af1e" containerName="init" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.071994 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8c6c7c-6c23-4457-8f25-65d7def68794" containerName="glance-log" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.072003 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ec5a58-e98c-4b0b-a711-b52e332ba26c" containerName="dnsmasq-dns" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.072014 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8c6c7c-6c23-4457-8f25-65d7def68794" containerName="glance-httpd" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.072809 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.080588 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.104769 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.107542 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.206658 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.206695 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/892bc73b-f7ab-40c1-af7b-9540725292f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.206821 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.206926 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.206956 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.207017 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6k9g\" (UniqueName: \"kubernetes.io/projected/892bc73b-f7ab-40c1-af7b-9540725292f1-kube-api-access-v6k9g\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.207136 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/892bc73b-f7ab-40c1-af7b-9540725292f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.207219 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.308640 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/892bc73b-f7ab-40c1-af7b-9540725292f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.308969 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.308995 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.309012 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/892bc73b-f7ab-40c1-af7b-9540725292f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.309171 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/892bc73b-f7ab-40c1-af7b-9540725292f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.309239 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.309468 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/892bc73b-f7ab-40c1-af7b-9540725292f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.309069 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.309541 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.309568 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.309593 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6k9g\" (UniqueName: \"kubernetes.io/projected/892bc73b-f7ab-40c1-af7b-9540725292f1-kube-api-access-v6k9g\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.314448 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.314610 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.315063 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.315518 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.329631 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6k9g\" (UniqueName: \"kubernetes.io/projected/892bc73b-f7ab-40c1-af7b-9540725292f1-kube-api-access-v6k9g\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.341502 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.419014 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.625955 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-n6d4q"] Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.632942 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-n6d4q"] Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.752913 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hgpw7"] Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.755066 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.756970 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.761954 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.765263 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.765413 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.765893 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nlxvv" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.767539 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hgpw7"] Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.849838 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-fernet-keys\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.849894 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52jjl\" (UniqueName: \"kubernetes.io/projected/a4dca0b7-4681-4e3c-8602-b777c31b27f1-kube-api-access-52jjl\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.849948 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-config-data\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.849978 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-scripts\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.850009 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-credential-keys\") 
pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.850202 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-combined-ca-bundle\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.952325 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-scripts\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.952367 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-credential-keys\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.952493 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-combined-ca-bundle\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.952561 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-fernet-keys\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.952581 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52jjl\" (UniqueName: \"kubernetes.io/projected/a4dca0b7-4681-4e3c-8602-b777c31b27f1-kube-api-access-52jjl\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.952607 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-config-data\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.959154 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-fernet-keys\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.959546 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-credential-keys\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc 
kubenswrapper[4779]: I1128 12:55:04.959798 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-combined-ca-bundle\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.959811 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-scripts\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.961608 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-config-data\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:04 crc kubenswrapper[4779]: I1128 12:55:04.971224 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52jjl\" (UniqueName: \"kubernetes.io/projected/a4dca0b7-4681-4e3c-8602-b777c31b27f1-kube-api-access-52jjl\") pod \"keystone-bootstrap-hgpw7\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:05 crc kubenswrapper[4779]: I1128 12:55:05.086336 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:05 crc kubenswrapper[4779]: E1128 12:55:05.456575 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 28 12:55:05 crc kubenswrapper[4779]: E1128 12:55:05.457692 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhrsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-sh4hl_openstack(aaa51f35-9ab4-4629-ae5a-349484d0917d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 12:55:05 crc kubenswrapper[4779]: E1128 12:55:05.459146 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-sh4hl" podUID="aaa51f35-9ab4-4629-ae5a-349484d0917d" Nov 28 12:55:05 crc kubenswrapper[4779]: I1128 12:55:05.737850 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8c6c7c-6c23-4457-8f25-65d7def68794" path="/var/lib/kubelet/pods/6f8c6c7c-6c23-4457-8f25-65d7def68794/volumes" Nov 28 12:55:05 crc kubenswrapper[4779]: I1128 12:55:05.739867 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e28a04-d7f0-46a8-81db-5b23fc3ac835" path="/var/lib/kubelet/pods/e6e28a04-d7f0-46a8-81db-5b23fc3ac835/volumes" Nov 28 12:55:05 crc kubenswrapper[4779]: E1128 12:55:05.940114 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 28 12:55:05 crc kubenswrapper[4779]: 
E1128 12:55:05.940266 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlrz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-ggv2n_openstack(1f844f06-a227-4423-9d97-33f9c85c0df8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 12:55:05 crc kubenswrapper[4779]: E1128 12:55:05.941477 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-ggv2n" podUID="1f844f06-a227-4423-9d97-33f9c85c0df8"
Nov 28 12:55:06 crc kubenswrapper[4779]: E1128 12:55:06.029389 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-ggv2n" podUID="1f844f06-a227-4423-9d97-33f9c85c0df8"
Nov 28 12:55:06 crc kubenswrapper[4779]: E1128 12:55:06.032014 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-sh4hl" podUID="aaa51f35-9ab4-4629-ae5a-349484d0917d"
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.253958 4779 scope.go:117] "RemoveContainer" containerID="cc88550e5a67e418e7de34985a6c442e7dc3248050bdaef472f7e10398197d6f"
Nov 28 12:55:06 crc kubenswrapper[4779]: E1128 12:55:06.271533 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified"
Nov 28 12:55:06 crc kubenswrapper[4779]: E1128 12:55:06.271725 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsmk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-2rlgj_openstack(090eca16-3536-4b84-85c8-e9a0d3a7deb6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 12:55:06 crc kubenswrapper[4779]: E1128 12:55:06.273256 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-2rlgj" podUID="090eca16-3536-4b84-85c8-e9a0d3a7deb6"
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.434549 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr"
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.583314 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km8m4\" (UniqueName: \"kubernetes.io/projected/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-kube-api-access-km8m4\") pod \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") "
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.583641 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-dns-swift-storage-0\") pod \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") "
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.583672 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-ovsdbserver-nb\") pod \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") "
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.583754 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-dns-svc\") pod \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") "
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.583773 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-config\") pod \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") "
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.583812 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-ovsdbserver-sb\") pod \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\" (UID: \"8367a732-6c2b-4fbd-8325-0e3c6eabc40e\") "
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.591902 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-kube-api-access-km8m4" (OuterVolumeSpecName: "kube-api-access-km8m4") pod "8367a732-6c2b-4fbd-8325-0e3c6eabc40e" (UID: "8367a732-6c2b-4fbd-8325-0e3c6eabc40e"). InnerVolumeSpecName "kube-api-access-km8m4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.676771 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8367a732-6c2b-4fbd-8325-0e3c6eabc40e" (UID: "8367a732-6c2b-4fbd-8325-0e3c6eabc40e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.682270 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8367a732-6c2b-4fbd-8325-0e3c6eabc40e" (UID: "8367a732-6c2b-4fbd-8325-0e3c6eabc40e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.685972 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8367a732-6c2b-4fbd-8325-0e3c6eabc40e" (UID: "8367a732-6c2b-4fbd-8325-0e3c6eabc40e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.686413 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km8m4\" (UniqueName: \"kubernetes.io/projected/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-kube-api-access-km8m4\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.686430 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.686440 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.686448 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.687182 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-config" (OuterVolumeSpecName: "config") pod "8367a732-6c2b-4fbd-8325-0e3c6eabc40e" (UID: "8367a732-6c2b-4fbd-8325-0e3c6eabc40e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:55:06 crc kubenswrapper[4779]: I1128 12:55:06.698631 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8367a732-6c2b-4fbd-8325-0e3c6eabc40e" (UID: "8367a732-6c2b-4fbd-8325-0e3c6eabc40e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:06.766201 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hgpw7"]
Nov 28 12:55:07 crc kubenswrapper[4779]: W1128 12:55:06.767636 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4dca0b7_4681_4e3c_8602_b777c31b27f1.slice/crio-50eb37b30f64ae38468311a98bb692b8345b932a303140ca9819e4ed67b51045 WatchSource:0}: Error finding container 50eb37b30f64ae38468311a98bb692b8345b932a303140ca9819e4ed67b51045: Status 404 returned error can't find the container with id 50eb37b30f64ae38468311a98bb692b8345b932a303140ca9819e4ed67b51045
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:06.788216 4779 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:06.788240 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8367a732-6c2b-4fbd-8325-0e3c6eabc40e-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:06.981265 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 28 12:55:07 crc kubenswrapper[4779]: W1128 12:55:06.982334 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod892bc73b_f7ab_40c1_af7b_9540725292f1.slice/crio-1383e0c9debe5aa7b1c9e3386675f5558c7623a8eee377273f41f0e7c538f46a WatchSource:0}: Error finding container 1383e0c9debe5aa7b1c9e3386675f5558c7623a8eee377273f41f0e7c538f46a: Status 404 returned error can't find the container with id 1383e0c9debe5aa7b1c9e3386675f5558c7623a8eee377273f41f0e7c538f46a
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:07.037883 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hgpw7" event={"ID":"a4dca0b7-4681-4e3c-8602-b777c31b27f1","Type":"ContainerStarted","Data":"50eb37b30f64ae38468311a98bb692b8345b932a303140ca9819e4ed67b51045"}
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:07.040298 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1c512ed-6e02-45b5-a320-0a1b58b074ab","Type":"ContainerStarted","Data":"a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19"}
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:07.041109 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"892bc73b-f7ab-40c1-af7b-9540725292f1","Type":"ContainerStarted","Data":"1383e0c9debe5aa7b1c9e3386675f5558c7623a8eee377273f41f0e7c538f46a"}
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:07.046833 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr"
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:07.048005 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" event={"ID":"8367a732-6c2b-4fbd-8325-0e3c6eabc40e","Type":"ContainerDied","Data":"762c62bca407da7bef127c3d83c28d28337ae4fdc3c3e9d8ad7a97d582763e73"}
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:07.048125 4779 scope.go:117] "RemoveContainer" containerID="279416cdd5a7057b391b215ca9fea4e4dc0ce87b39f1f4adb3528ccd3038908c"
Nov 28 12:55:07 crc kubenswrapper[4779]: E1128 12:55:07.049305 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-2rlgj" podUID="090eca16-3536-4b84-85c8-e9a0d3a7deb6"
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:07.098977 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-vjtpr"]
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:07.109235 4779 scope.go:117] "RemoveContainer" containerID="bdd2c1b22e61b1c91b586239352a97b6ac5bbdb40d348f8411441e27bec1dc43"
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:07.110048 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-vjtpr"]
Nov 28 12:55:07 crc kubenswrapper[4779]: I1128 12:55:07.742480 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" path="/var/lib/kubelet/pods/8367a732-6c2b-4fbd-8325-0e3c6eabc40e/volumes"
Nov 28 12:55:07 crc kubenswrapper[4779]: E1128 12:55:07.749416 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod082059dc_73e6_482b_a0ad_ed2a62282f61.slice/crio-121b08d2adad132bfcefaa378c89b7d30716fbd3708ae63ea920eecc8466b749.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod082059dc_73e6_482b_a0ad_ed2a62282f61.slice/crio-conmon-121b08d2adad132bfcefaa378c89b7d30716fbd3708ae63ea920eecc8466b749.scope\": RecentStats: unable to find data in memory cache]"
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.055240 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hgpw7" event={"ID":"a4dca0b7-4681-4e3c-8602-b777c31b27f1","Type":"ContainerStarted","Data":"d1d8f787176c8df9ef9f55761ae58991d4c78c9e80ce202068bfc1496ae1f167"}
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.056914 4779 generic.go:334] "Generic (PLEG): container finished" podID="082059dc-73e6-482b-a0ad-ed2a62282f61" containerID="121b08d2adad132bfcefaa378c89b7d30716fbd3708ae63ea920eecc8466b749" exitCode=0
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.056965 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q7v56" event={"ID":"082059dc-73e6-482b-a0ad-ed2a62282f61","Type":"ContainerDied","Data":"121b08d2adad132bfcefaa378c89b7d30716fbd3708ae63ea920eecc8466b749"}
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.058879 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"892bc73b-f7ab-40c1-af7b-9540725292f1","Type":"ContainerStarted","Data":"f6ce8e51262e48f0e55986bcd03ca7e27b32d59ec9661907fe63a77584e36d84"}
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.058914 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"892bc73b-f7ab-40c1-af7b-9540725292f1","Type":"ContainerStarted","Data":"bf54c552df0d3675666c9a5d3ec263945f8d9fd2a5f1772bd9b2fba1af83b947"}
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.061013 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6a188e5-6e45-404a-b021-91592dce265d","Type":"ContainerStarted","Data":"5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07"}
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.061175 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c6a188e5-6e45-404a-b021-91592dce265d" containerName="glance-httpd" containerID="cri-o://5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07" gracePeriod=30
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.061295 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c6a188e5-6e45-404a-b021-91592dce265d" containerName="glance-log" containerID="cri-o://9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d" gracePeriod=30
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.072826 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hgpw7" podStartSLOduration=4.072814793 podStartE2EDuration="4.072814793s" podCreationTimestamp="2025-11-28 12:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:08.070447271 +0000 UTC m=+1168.636122625" watchObservedRunningTime="2025-11-28 12:55:08.072814793 +0000 UTC m=+1168.638490147"
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.114325 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.114307888 podStartE2EDuration="4.114307888s" podCreationTimestamp="2025-11-28 12:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:08.105598558 +0000 UTC m=+1168.671273932" watchObservedRunningTime="2025-11-28 12:55:08.114307888 +0000 UTC m=+1168.679983242"
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.134525 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=35.134508431 podStartE2EDuration="35.134508431s" podCreationTimestamp="2025-11-28 12:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:08.131422329 +0000 UTC m=+1168.697097693" watchObservedRunningTime="2025-11-28 12:55:08.134508431 +0000 UTC m=+1168.700183795"
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.710999 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.819112 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"c6a188e5-6e45-404a-b021-91592dce265d\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") "
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.819214 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-public-tls-certs\") pod \"c6a188e5-6e45-404a-b021-91592dce265d\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") "
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.819836 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-scripts\") pod \"c6a188e5-6e45-404a-b021-91592dce265d\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") "
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.819907 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp8qw\" (UniqueName: \"kubernetes.io/projected/c6a188e5-6e45-404a-b021-91592dce265d-kube-api-access-wp8qw\") pod \"c6a188e5-6e45-404a-b021-91592dce265d\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") "
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.819933 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-combined-ca-bundle\") pod \"c6a188e5-6e45-404a-b021-91592dce265d\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") "
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.819959 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6a188e5-6e45-404a-b021-91592dce265d-httpd-run\") pod \"c6a188e5-6e45-404a-b021-91592dce265d\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") "
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.819979 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a188e5-6e45-404a-b021-91592dce265d-logs\") pod \"c6a188e5-6e45-404a-b021-91592dce265d\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") "
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.819999 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-config-data\") pod \"c6a188e5-6e45-404a-b021-91592dce265d\" (UID: \"c6a188e5-6e45-404a-b021-91592dce265d\") "
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.820393 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a188e5-6e45-404a-b021-91592dce265d-logs" (OuterVolumeSpecName: "logs") pod "c6a188e5-6e45-404a-b021-91592dce265d" (UID: "c6a188e5-6e45-404a-b021-91592dce265d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.820525 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a188e5-6e45-404a-b021-91592dce265d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c6a188e5-6e45-404a-b021-91592dce265d" (UID: "c6a188e5-6e45-404a-b021-91592dce265d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.824561 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a188e5-6e45-404a-b021-91592dce265d-kube-api-access-wp8qw" (OuterVolumeSpecName: "kube-api-access-wp8qw") pod "c6a188e5-6e45-404a-b021-91592dce265d" (UID: "c6a188e5-6e45-404a-b021-91592dce265d"). InnerVolumeSpecName "kube-api-access-wp8qw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.825208 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-scripts" (OuterVolumeSpecName: "scripts") pod "c6a188e5-6e45-404a-b021-91592dce265d" (UID: "c6a188e5-6e45-404a-b021-91592dce265d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.836198 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "c6a188e5-6e45-404a-b021-91592dce265d" (UID: "c6a188e5-6e45-404a-b021-91592dce265d"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.859835 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6a188e5-6e45-404a-b021-91592dce265d" (UID: "c6a188e5-6e45-404a-b021-91592dce265d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.883147 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c6a188e5-6e45-404a-b021-91592dce265d" (UID: "c6a188e5-6e45-404a-b021-91592dce265d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.897717 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-config-data" (OuterVolumeSpecName: "config-data") pod "c6a188e5-6e45-404a-b021-91592dce265d" (UID: "c6a188e5-6e45-404a-b021-91592dce265d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.922199 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp8qw\" (UniqueName: \"kubernetes.io/projected/c6a188e5-6e45-404a-b021-91592dce265d-kube-api-access-wp8qw\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.922235 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.922250 4779 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6a188e5-6e45-404a-b021-91592dce265d-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.922263 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6a188e5-6e45-404a-b021-91592dce265d-logs\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.922274 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.922311 4779 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.922325 4779 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.922340 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6a188e5-6e45-404a-b021-91592dce265d-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:08 crc kubenswrapper[4779]: I1128 12:55:08.940734 4779 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.024589 4779 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.089566 4779 generic.go:334] "Generic (PLEG): container finished" podID="c6a188e5-6e45-404a-b021-91592dce265d" containerID="5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07" exitCode=0
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.089607 4779 generic.go:334] "Generic (PLEG): container finished" podID="c6a188e5-6e45-404a-b021-91592dce265d" containerID="9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d" exitCode=143
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.089623 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.089683 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6a188e5-6e45-404a-b021-91592dce265d","Type":"ContainerDied","Data":"5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07"}
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.089746 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6a188e5-6e45-404a-b021-91592dce265d","Type":"ContainerDied","Data":"9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d"}
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.089770 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6a188e5-6e45-404a-b021-91592dce265d","Type":"ContainerDied","Data":"aba7e6b5046ba9b690e1f4a7483bbac9a6a00b102ebec47a656ba1c82ee69e6d"}
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.089797 4779 scope.go:117] "RemoveContainer" containerID="5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.092148 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1c512ed-6e02-45b5-a320-0a1b58b074ab","Type":"ContainerStarted","Data":"d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce"}
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.120199 4779 scope.go:117] "RemoveContainer" containerID="9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.127567 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.136678 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.155554 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 12:55:09 crc kubenswrapper[4779]: E1128 12:55:09.155990 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" containerName="dnsmasq-dns"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.156003 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" containerName="dnsmasq-dns"
Nov 28 12:55:09 crc kubenswrapper[4779]: E1128 12:55:09.156013 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a188e5-6e45-404a-b021-91592dce265d" containerName="glance-httpd"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.156020 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a188e5-6e45-404a-b021-91592dce265d" containerName="glance-httpd"
Nov 28 12:55:09 crc kubenswrapper[4779]: E1128 12:55:09.156041 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a188e5-6e45-404a-b021-91592dce265d" containerName="glance-log"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.156046 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a188e5-6e45-404a-b021-91592dce265d" containerName="glance-log"
Nov 28 12:55:09 crc kubenswrapper[4779]: E1128 12:55:09.156060 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" containerName="init"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.156065 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" containerName="init"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.156226 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a188e5-6e45-404a-b021-91592dce265d" containerName="glance-log"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.156245 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" containerName="dnsmasq-dns"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.156297 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a188e5-6e45-404a-b021-91592dce265d" containerName="glance-httpd"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.157397 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.200245 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.200536 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.201508 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.212250 4779 scope.go:117] "RemoveContainer" containerID="5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07"
Nov 28 12:55:09 crc kubenswrapper[4779]: E1128 12:55:09.212634 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07\": container with ID starting with 5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07 not found: ID does not exist" containerID="5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.212673 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07"} err="failed to get container status \"5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07\": rpc error: code = NotFound desc = could not find container \"5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07\": container with ID starting with 5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07 not found: ID does not exist"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.212698 4779 scope.go:117] "RemoveContainer" containerID="9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d"
Nov 28 12:55:09 crc kubenswrapper[4779]: E1128 12:55:09.213028 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d\": container with ID starting with 9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d not found: ID does not exist" containerID="9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.213047 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d"} err="failed to get container status \"9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d\": rpc error: code = NotFound desc = could not find container \"9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d\": container with ID starting with 9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d not found: ID does not exist"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.213059 4779 scope.go:117] "RemoveContainer" containerID="5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.213359 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07"} err="failed to get container status \"5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07\": rpc error: code = NotFound desc = could not find container \"5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07\": container with ID starting with 5428399fc8b580cc25d5e6886dfcf7f71c1e9c8a7e98b87f02a0691147c7ed07 not found: ID does not exist"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.213379 4779 scope.go:117] "RemoveContainer" containerID="9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.213665 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d"} err="failed to get container status \"9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d\": rpc error: code = NotFound desc = could not find container \"9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d\": container with ID starting with 9ecb34006f6759c31013e30318a9b29c315f49da4b35e69a9e3f9dc764a06c2d not found: ID does not exist"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.332203 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-scripts\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.332513 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.332675 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.332743 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.332898 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.332968 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7brf\" (UniqueName: \"kubernetes.io/projected/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-kube-api-access-b7brf\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.333002 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-logs\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.333044 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-config-data\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.434761 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.434809 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.434862 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.434883 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7brf\" (UniqueName: \"kubernetes.io/projected/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-kube-api-access-b7brf\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.434903 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-logs\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.434925 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-config-data\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.434957 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-scripts\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.434991 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.436392 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-logs\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.436545 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.436757 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.448799 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-scripts\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.449057 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.449259 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.449538 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-config-data\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.452792 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7brf\" (UniqueName: \"kubernetes.io/projected/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-kube-api-access-b7brf\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.485701 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.523141 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.549320 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-q7v56"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.637263 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/082059dc-73e6-482b-a0ad-ed2a62282f61-config\") pod \"082059dc-73e6-482b-a0ad-ed2a62282f61\" (UID: \"082059dc-73e6-482b-a0ad-ed2a62282f61\") "
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.637355 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082059dc-73e6-482b-a0ad-ed2a62282f61-combined-ca-bundle\") pod \"082059dc-73e6-482b-a0ad-ed2a62282f61\" (UID: \"082059dc-73e6-482b-a0ad-ed2a62282f61\") "
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.637401 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hmfg\" (UniqueName: \"kubernetes.io/projected/082059dc-73e6-482b-a0ad-ed2a62282f61-kube-api-access-6hmfg\") pod \"082059dc-73e6-482b-a0ad-ed2a62282f61\" (UID: \"082059dc-73e6-482b-a0ad-ed2a62282f61\") "
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.642663 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/082059dc-73e6-482b-a0ad-ed2a62282f61-kube-api-access-6hmfg" (OuterVolumeSpecName: "kube-api-access-6hmfg") pod "082059dc-73e6-482b-a0ad-ed2a62282f61" (UID: "082059dc-73e6-482b-a0ad-ed2a62282f61"). InnerVolumeSpecName "kube-api-access-6hmfg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.659956 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082059dc-73e6-482b-a0ad-ed2a62282f61-config" (OuterVolumeSpecName: "config") pod "082059dc-73e6-482b-a0ad-ed2a62282f61" (UID: "082059dc-73e6-482b-a0ad-ed2a62282f61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.667649 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082059dc-73e6-482b-a0ad-ed2a62282f61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "082059dc-73e6-482b-a0ad-ed2a62282f61" (UID: "082059dc-73e6-482b-a0ad-ed2a62282f61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.739359 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a188e5-6e45-404a-b021-91592dce265d" path="/var/lib/kubelet/pods/c6a188e5-6e45-404a-b021-91592dce265d/volumes"
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.739618 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/082059dc-73e6-482b-a0ad-ed2a62282f61-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.739641 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082059dc-73e6-482b-a0ad-ed2a62282f61-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:09 crc kubenswrapper[4779]: I1128 12:55:09.739651 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hmfg\" (UniqueName: \"kubernetes.io/projected/082059dc-73e6-482b-a0ad-ed2a62282f61-kube-api-access-6hmfg\") on node \"crc\" DevicePath \"\""
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.033431 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.107363 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q7v56" event={"ID":"082059dc-73e6-482b-a0ad-ed2a62282f61","Type":"ContainerDied","Data":"8c401f9e3bbc7c8fc2ef8e0bcb612ea0e141b7fe4df0990fe5cbdcb27b0b2f9d"}
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.107401 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c401f9e3bbc7c8fc2ef8e0bcb612ea0e141b7fe4df0990fe5cbdcb27b0b2f9d"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.107403 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-q7v56"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.110273 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f67e7c90-06fb-42ba-98c6-b30f8f9d2829","Type":"ContainerStarted","Data":"5af88a65c055621a82d5fa5518209c209add1ed2427cf5662605b5c9e6590b92"}
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.129144 4779 generic.go:334] "Generic (PLEG): container finished" podID="a4dca0b7-4681-4e3c-8602-b777c31b27f1" containerID="d1d8f787176c8df9ef9f55761ae58991d4c78c9e80ce202068bfc1496ae1f167" exitCode=0
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.129186 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hgpw7" event={"ID":"a4dca0b7-4681-4e3c-8602-b777c31b27f1","Type":"ContainerDied","Data":"d1d8f787176c8df9ef9f55761ae58991d4c78c9e80ce202068bfc1496ae1f167"}
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.261746 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-wsv2w"]
Nov 28 12:55:10 crc kubenswrapper[4779]: E1128 12:55:10.263635 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="082059dc-73e6-482b-a0ad-ed2a62282f61" containerName="neutron-db-sync"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.263667 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="082059dc-73e6-482b-a0ad-ed2a62282f61" containerName="neutron-db-sync"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.263839 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="082059dc-73e6-482b-a0ad-ed2a62282f61" containerName="neutron-db-sync"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.264765 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.278956 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-wsv2w"]
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.378506 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6cf455dd68-ljtxn"]
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.379927 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cf455dd68-ljtxn"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.382523 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.382577 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.383328 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.383577 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9ppff"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.389189 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cf455dd68-ljtxn"]
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.452649 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.452715 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5blg\" (UniqueName: \"kubernetes.io/projected/fe5791bd-f850-498b-8dfe-bef249904487-kube-api-access-k5blg\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.452794 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.453319 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.453412 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.453489 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-config\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.554842 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.554897 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.554947 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-config\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.554980 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.555010 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-combined-ca-bundle\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.555033 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-config\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.555064 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5blg\" (UniqueName: \"kubernetes.io/projected/fe5791bd-f850-498b-8dfe-bef249904487-kube-api-access-k5blg\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.555142 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-httpd-config\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.555163 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml677\" (UniqueName: \"kubernetes.io/projected/4ebb4112-c634-428c-ae8a-55682be30c80-kube-api-access-ml677\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.555191 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.555214 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-ovndb-tls-certs\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.556531 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.557171 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.557339 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-config\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.557505 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.557810 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.594168 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5blg\" (UniqueName: \"kubernetes.io/projected/fe5791bd-f850-498b-8dfe-bef249904487-kube-api-access-k5blg\") pod \"dnsmasq-dns-84b966f6c9-wsv2w\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w"
Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.623490 4779 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w" Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.643475 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-vjtpr" podUID="8367a732-6c2b-4fbd-8325-0e3c6eabc40e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.656170 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-httpd-config\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.656214 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml677\" (UniqueName: \"kubernetes.io/projected/4ebb4112-c634-428c-ae8a-55682be30c80-kube-api-access-ml677\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.656254 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-ovndb-tls-certs\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.656305 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-combined-ca-bundle\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.656323 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-config\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.660924 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-ovndb-tls-certs\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.661560 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-httpd-config\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.661575 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-config\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.662912 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-combined-ca-bundle\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.676837 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml677\" (UniqueName: \"kubernetes.io/projected/4ebb4112-c634-428c-ae8a-55682be30c80-kube-api-access-ml677\") pod \"neutron-6cf455dd68-ljtxn\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:10 crc kubenswrapper[4779]: I1128 12:55:10.705706 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:11 crc kubenswrapper[4779]: I1128 12:55:11.155076 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f67e7c90-06fb-42ba-98c6-b30f8f9d2829","Type":"ContainerStarted","Data":"483276b16bb4b3bf69e3be86d05e08d9a9522bf4c858a5c52c1b459d2a00c5c8"} Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.200125 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7756d796d9-vcgbk"] Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.205199 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.213748 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.214082 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.216650 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7756d796d9-vcgbk"] Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.405662 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-combined-ca-bundle\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.405748 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lm65\" (UniqueName: \"kubernetes.io/projected/0cb2d061-fb70-4108-8204-9bf7e699c89f-kube-api-access-8lm65\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.405778 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-public-tls-certs\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.405811 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-config\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc 
kubenswrapper[4779]: I1128 12:55:12.405852 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-ovndb-tls-certs\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.405926 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-httpd-config\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.406049 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-internal-tls-certs\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.508544 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-combined-ca-bundle\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.508655 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lm65\" (UniqueName: \"kubernetes.io/projected/0cb2d061-fb70-4108-8204-9bf7e699c89f-kube-api-access-8lm65\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.508698 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-public-tls-certs\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.508744 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-config\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.508801 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-ovndb-tls-certs\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.508867 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-httpd-config\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.508923 4779 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-internal-tls-certs\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.513167 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-combined-ca-bundle\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.513498 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-httpd-config\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.513761 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-config\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.515022 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-internal-tls-certs\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.516986 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-public-tls-certs\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.526507 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb2d061-fb70-4108-8204-9bf7e699c89f-ovndb-tls-certs\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.529048 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lm65\" (UniqueName: \"kubernetes.io/projected/0cb2d061-fb70-4108-8204-9bf7e699c89f-kube-api-access-8lm65\") pod \"neutron-7756d796d9-vcgbk\" (UID: \"0cb2d061-fb70-4108-8204-9bf7e699c89f\") " pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:12 crc kubenswrapper[4779]: I1128 12:55:12.542961 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.055360 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.193854 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hgpw7" event={"ID":"a4dca0b7-4681-4e3c-8602-b777c31b27f1","Type":"ContainerDied","Data":"50eb37b30f64ae38468311a98bb692b8345b932a303140ca9819e4ed67b51045"} Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.193900 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50eb37b30f64ae38468311a98bb692b8345b932a303140ca9819e4ed67b51045" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.193873 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hgpw7" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.196393 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1c512ed-6e02-45b5-a320-0a1b58b074ab","Type":"ContainerStarted","Data":"b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f"} Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.242471 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-credential-keys\") pod \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.242860 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-combined-ca-bundle\") pod \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.242895 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-scripts\") pod \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.242931 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-config-data\") pod \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.242974 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52jjl\" (UniqueName: \"kubernetes.io/projected/a4dca0b7-4681-4e3c-8602-b777c31b27f1-kube-api-access-52jjl\") pod \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.243004 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-fernet-keys\") pod \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\" (UID: \"a4dca0b7-4681-4e3c-8602-b777c31b27f1\") " Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.247973 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-scripts" (OuterVolumeSpecName: "scripts") pod "a4dca0b7-4681-4e3c-8602-b777c31b27f1" (UID: "a4dca0b7-4681-4e3c-8602-b777c31b27f1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.248587 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a4dca0b7-4681-4e3c-8602-b777c31b27f1" (UID: "a4dca0b7-4681-4e3c-8602-b777c31b27f1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.259507 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a4dca0b7-4681-4e3c-8602-b777c31b27f1" (UID: "a4dca0b7-4681-4e3c-8602-b777c31b27f1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.259688 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4dca0b7-4681-4e3c-8602-b777c31b27f1-kube-api-access-52jjl" (OuterVolumeSpecName: "kube-api-access-52jjl") pod "a4dca0b7-4681-4e3c-8602-b777c31b27f1" (UID: "a4dca0b7-4681-4e3c-8602-b777c31b27f1"). InnerVolumeSpecName "kube-api-access-52jjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.281324 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4dca0b7-4681-4e3c-8602-b777c31b27f1" (UID: "a4dca0b7-4681-4e3c-8602-b777c31b27f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.282667 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-config-data" (OuterVolumeSpecName: "config-data") pod "a4dca0b7-4681-4e3c-8602-b777c31b27f1" (UID: "a4dca0b7-4681-4e3c-8602-b777c31b27f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.344828 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.344856 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.344865 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.344873 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52jjl\" (UniqueName: \"kubernetes.io/projected/a4dca0b7-4681-4e3c-8602-b777c31b27f1-kube-api-access-52jjl\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.344882 4779 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.344891 4779 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4dca0b7-4681-4e3c-8602-b777c31b27f1-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.420312 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.420349 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.463690 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.483286 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.500791 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7756d796d9-vcgbk"] Nov 28 12:55:14 crc kubenswrapper[4779]: I1128 12:55:14.530447 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-wsv2w"] Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.190695 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-68b65c9788-nmrvn"] Nov 28 12:55:15 crc kubenswrapper[4779]: E1128 12:55:15.191531 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4dca0b7-4681-4e3c-8602-b777c31b27f1" containerName="keystone-bootstrap" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.191543 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4dca0b7-4681-4e3c-8602-b777c31b27f1" containerName="keystone-bootstrap" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.191708 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4dca0b7-4681-4e3c-8602-b777c31b27f1" containerName="keystone-bootstrap" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.192260 4779 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.196653 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.196831 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.196937 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.197044 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.197161 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nlxvv" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.200903 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-68b65c9788-nmrvn"] Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.227588 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.232050 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f67e7c90-06fb-42ba-98c6-b30f8f9d2829","Type":"ContainerStarted","Data":"59c13bcf2469efa826e82d798716c85b168f3ba99f1de588a860b5f0ee81b3ba"} Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.235100 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7756d796d9-vcgbk" event={"ID":"0cb2d061-fb70-4108-8204-9bf7e699c89f","Type":"ContainerStarted","Data":"9a6c999b7a930463b7b9a9b447d22334a5b4ac35181e316203eb1566b429cb62"} Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.235151 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7756d796d9-vcgbk" event={"ID":"0cb2d061-fb70-4108-8204-9bf7e699c89f","Type":"ContainerStarted","Data":"1d0067960ffb7cea4d568127a0de22df25353550a963fedf61329a6a12ff589a"} Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.244106 4779 generic.go:334] "Generic (PLEG): container finished" podID="fe5791bd-f850-498b-8dfe-bef249904487" containerID="2218a9e512e5465c40fe81fd4578a0080372adccbe8429b71c1afa3665c81d2f" exitCode=0 Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.246347 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w" event={"ID":"fe5791bd-f850-498b-8dfe-bef249904487","Type":"ContainerDied","Data":"2218a9e512e5465c40fe81fd4578a0080372adccbe8429b71c1afa3665c81d2f"} Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.246390 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.246412 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.246420 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w" event={"ID":"fe5791bd-f850-498b-8dfe-bef249904487","Type":"ContainerStarted","Data":"45fbd169684234876c32747ef29a25dedcd1ab5b739a01d4b50e96aa1f93646b"} Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.291786 4779 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.291769235 podStartE2EDuration="6.291769235s" podCreationTimestamp="2025-11-28 12:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:15.252421116 +0000 UTC m=+1175.818096470" watchObservedRunningTime="2025-11-28 12:55:15.291769235 +0000 UTC m=+1175.857444589" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.364979 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-scripts\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.365180 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-config-data\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.365199 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-internal-tls-certs\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.365229 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-combined-ca-bundle\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.365251 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-fernet-keys\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.365275 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v98f\" (UniqueName: \"kubernetes.io/projected/8da74c5c-34bf-4136-a395-51d2be7258db-kube-api-access-9v98f\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.365336 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-credential-keys\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.365356 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-public-tls-certs\") pod 
\"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.467232 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-scripts\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.467520 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-config-data\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.467597 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-internal-tls-certs\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.467757 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-combined-ca-bundle\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.467798 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-fernet-keys\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.468188 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v98f\" (UniqueName: \"kubernetes.io/projected/8da74c5c-34bf-4136-a395-51d2be7258db-kube-api-access-9v98f\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.468229 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-credential-keys\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.468254 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-public-tls-certs\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.472551 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-internal-tls-certs\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" 
Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.472951 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-public-tls-certs\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.474566 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-combined-ca-bundle\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.476008 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-config-data\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.476407 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-credential-keys\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.476647 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-fernet-keys\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.477269 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8da74c5c-34bf-4136-a395-51d2be7258db-scripts\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.484076 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v98f\" (UniqueName: \"kubernetes.io/projected/8da74c5c-34bf-4136-a395-51d2be7258db-kube-api-access-9v98f\") pod \"keystone-68b65c9788-nmrvn\" (UID: \"8da74c5c-34bf-4136-a395-51d2be7258db\") " pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.544182 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:15 crc kubenswrapper[4779]: I1128 12:55:15.605870 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cf455dd68-ljtxn"] Nov 28 12:55:16 crc kubenswrapper[4779]: I1128 12:55:16.060252 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-68b65c9788-nmrvn"] Nov 28 12:55:16 crc kubenswrapper[4779]: I1128 12:55:16.261483 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68b65c9788-nmrvn" event={"ID":"8da74c5c-34bf-4136-a395-51d2be7258db","Type":"ContainerStarted","Data":"dba76f9d43a2a9aa8aecd93141bc8f884e38da3f4565c8433bf7048eaa6be40c"} Nov 28 12:55:16 crc kubenswrapper[4779]: I1128 12:55:16.267018 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7756d796d9-vcgbk" event={"ID":"0cb2d061-fb70-4108-8204-9bf7e699c89f","Type":"ContainerStarted","Data":"494f6f91a6461eb413da15ef8f709010a529dc1c9ca0bb3c41cdb56ce7bbdb1a"} Nov 28 12:55:16 crc kubenswrapper[4779]: I1128 12:55:16.269593 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf455dd68-ljtxn" event={"ID":"4ebb4112-c634-428c-ae8a-55682be30c80","Type":"ContainerStarted","Data":"c74e15d9a598364068a61051c24796a0778af418c366f7f222054b1138d51028"} Nov 28 12:55:16 crc kubenswrapper[4779]: I1128 12:55:16.285834 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:55:16 crc kubenswrapper[4779]: I1128 12:55:16.285905 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:55:17 crc kubenswrapper[4779]: I1128 12:55:17.127179 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:17 crc kubenswrapper[4779]: I1128 12:55:17.281047 4779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:55:17 crc kubenswrapper[4779]: I1128 12:55:17.947197 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 12:55:18 crc kubenswrapper[4779]: I1128 12:55:18.292461 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w" event={"ID":"fe5791bd-f850-498b-8dfe-bef249904487","Type":"ContainerStarted","Data":"23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2"} Nov 28 12:55:18 crc kubenswrapper[4779]: I1128 12:55:18.293228 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w" Nov 28 12:55:18 crc kubenswrapper[4779]: I1128 12:55:18.300526 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf455dd68-ljtxn" event={"ID":"4ebb4112-c634-428c-ae8a-55682be30c80","Type":"ContainerStarted","Data":"aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28"} Nov 28 12:55:18 crc kubenswrapper[4779]: I1128 12:55:18.300779 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7756d796d9-vcgbk" 
Nov 28 12:55:18 crc kubenswrapper[4779]: I1128 12:55:18.347754 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w" podStartSLOduration=8.347737787 podStartE2EDuration="8.347737787s" podCreationTimestamp="2025-11-28 12:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:18.319881242 +0000 UTC m=+1178.885556596" watchObservedRunningTime="2025-11-28 12:55:18.347737787 +0000 UTC m=+1178.913413141" Nov 28 12:55:18 crc kubenswrapper[4779]: I1128 12:55:18.348170 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7756d796d9-vcgbk" podStartSLOduration=6.348164748 podStartE2EDuration="6.348164748s" podCreationTimestamp="2025-11-28 12:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:18.341166164 +0000 UTC m=+1178.906841518" watchObservedRunningTime="2025-11-28 12:55:18.348164748 +0000 UTC m=+1178.913840092" Nov 28 12:55:19 crc kubenswrapper[4779]: I1128 12:55:19.309340 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68b65c9788-nmrvn" event={"ID":"8da74c5c-34bf-4136-a395-51d2be7258db","Type":"ContainerStarted","Data":"766bdc6eeaac74ead976c37893467b3e6aa374ff4492bb6469cedd0b7705db2e"} Nov 28 12:55:19 crc kubenswrapper[4779]: I1128 12:55:19.309751 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:19 crc kubenswrapper[4779]: I1128 12:55:19.314728 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf455dd68-ljtxn" event={"ID":"4ebb4112-c634-428c-ae8a-55682be30c80","Type":"ContainerStarted","Data":"4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6"} Nov 28 12:55:19 crc kubenswrapper[4779]: I1128 12:55:19.314756 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:19 crc kubenswrapper[4779]: I1128 12:55:19.358002 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6cf455dd68-ljtxn" podStartSLOduration=9.3579861 podStartE2EDuration="9.3579861s" podCreationTimestamp="2025-11-28 12:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:19.355954986 +0000 UTC m=+1179.921630340" watchObservedRunningTime="2025-11-28 12:55:19.3579861 +0000 UTC m=+1179.923661454" Nov 28 12:55:19 crc kubenswrapper[4779]: I1128 12:55:19.359284 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-68b65c9788-nmrvn" podStartSLOduration=4.359278284 podStartE2EDuration="4.359278284s" podCreationTimestamp="2025-11-28 12:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:19.337227232 +0000 UTC m=+1179.902902586" watchObservedRunningTime="2025-11-28 12:55:19.359278284 +0000 UTC m=+1179.924953638" Nov 28 12:55:19 crc kubenswrapper[4779]: I1128 12:55:19.524247 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 12:55:19 crc kubenswrapper[4779]: I1128 12:55:19.524339 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 12:55:19 crc kubenswrapper[4779]: I1128 12:55:19.575787 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 12:55:19 crc kubenswrapper[4779]: I1128 12:55:19.588174 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 12:55:20 crc kubenswrapper[4779]: I1128 12:55:20.325624 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nrmk4" event={"ID":"be93bf1f-510b-4a38-8f85-b59c36b2feb1","Type":"ContainerStarted","Data":"f8062de0006b63e01d516fec87f5ce45e8e1ff97a8d34602c6f5d5f2a47bd0c6"} Nov 28 12:55:20 crc kubenswrapper[4779]: I1128 12:55:20.325988 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 12:55:20 crc kubenswrapper[4779]: I1128 12:55:20.326007 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 12:55:20 crc kubenswrapper[4779]: I1128 12:55:20.749200 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-nrmk4" podStartSLOduration=4.143776329 podStartE2EDuration="47.749181192s" podCreationTimestamp="2025-11-28 12:54:33 +0000 UTC" firstStartedPulling="2025-11-28 12:54:35.480271003 +0000 UTC m=+1136.045946357" lastFinishedPulling="2025-11-28 12:55:19.085675846 +0000 UTC m=+1179.651351220" observedRunningTime="2025-11-28 12:55:20.378835511 +0000 UTC m=+1180.944510885" watchObservedRunningTime="2025-11-28 12:55:20.749181192 +0000 UTC m=+1181.314856566" Nov 28 12:55:21 crc kubenswrapper[4779]: I1128 12:55:21.341754 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2rlgj" event={"ID":"090eca16-3536-4b84-85c8-e9a0d3a7deb6","Type":"ContainerStarted","Data":"eed56070a46314d3063bf11a7af91f813f4c532a1abe79a677c9bd9a61beba5e"} Nov 28 12:55:21 crc kubenswrapper[4779]: I1128 12:55:21.364968 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-2rlgj" podStartSLOduration=2.8409817 podStartE2EDuration="48.364948417s" podCreationTimestamp="2025-11-28 12:54:33 +0000 UTC" firstStartedPulling="2025-11-28 12:54:34.786979663 +0000 UTC m=+1135.352655017" lastFinishedPulling="2025-11-28 12:55:20.31094637 +0000 UTC m=+1180.876621734" observedRunningTime="2025-11-28 12:55:21.359153224 +0000 UTC m=+1181.924828588" watchObservedRunningTime="2025-11-28 12:55:21.364948417 +0000 UTC m=+1181.930623771" Nov 28 12:55:22 crc kubenswrapper[4779]: I1128 12:55:22.110438 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 12:55:22 crc kubenswrapper[4779]: I1128 12:55:22.242153 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 12:55:22 crc kubenswrapper[4779]: I1128 12:55:22.352587 4779 generic.go:334] "Generic (PLEG): container finished" podID="be93bf1f-510b-4a38-8f85-b59c36b2feb1" containerID="f8062de0006b63e01d516fec87f5ce45e8e1ff97a8d34602c6f5d5f2a47bd0c6" exitCode=0 Nov 28 12:55:22 crc kubenswrapper[4779]: I1128 12:55:22.352670 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nrmk4" 
event={"ID":"be93bf1f-510b-4a38-8f85-b59c36b2feb1","Type":"ContainerDied","Data":"f8062de0006b63e01d516fec87f5ce45e8e1ff97a8d34602c6f5d5f2a47bd0c6"} Nov 28 12:55:24 crc kubenswrapper[4779]: I1128 12:55:24.370715 4779 generic.go:334] "Generic (PLEG): container finished" podID="090eca16-3536-4b84-85c8-e9a0d3a7deb6" containerID="eed56070a46314d3063bf11a7af91f813f4c532a1abe79a677c9bd9a61beba5e" exitCode=0 Nov 28 12:55:24 crc kubenswrapper[4779]: I1128 12:55:24.370816 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2rlgj" event={"ID":"090eca16-3536-4b84-85c8-e9a0d3a7deb6","Type":"ContainerDied","Data":"eed56070a46314d3063bf11a7af91f813f4c532a1abe79a677c9bd9a61beba5e"} Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.187308 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nrmk4" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.381697 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nrmk4" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.381927 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nrmk4" event={"ID":"be93bf1f-510b-4a38-8f85-b59c36b2feb1","Type":"ContainerDied","Data":"9a376e3d5d23f3a824f1298a4303597ae79cb88f1c0edcaf52e53ab8bd072e34"} Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.382016 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a376e3d5d23f3a824f1298a4303597ae79cb88f1c0edcaf52e53ab8bd072e34" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.384790 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be93bf1f-510b-4a38-8f85-b59c36b2feb1-logs\") pod \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.384910 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-config-data\") pod \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.385616 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be93bf1f-510b-4a38-8f85-b59c36b2feb1-logs" (OuterVolumeSpecName: "logs") pod "be93bf1f-510b-4a38-8f85-b59c36b2feb1" (UID: "be93bf1f-510b-4a38-8f85-b59c36b2feb1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.385705 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-combined-ca-bundle\") pod \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.385741 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdbvk\" (UniqueName: \"kubernetes.io/projected/be93bf1f-510b-4a38-8f85-b59c36b2feb1-kube-api-access-sdbvk\") pod \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.385786 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-scripts\") pod \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\" (UID: \"be93bf1f-510b-4a38-8f85-b59c36b2feb1\") " Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.386209 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be93bf1f-510b-4a38-8f85-b59c36b2feb1-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.390375 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-scripts" (OuterVolumeSpecName: "scripts") pod "be93bf1f-510b-4a38-8f85-b59c36b2feb1" (UID: "be93bf1f-510b-4a38-8f85-b59c36b2feb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.393156 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be93bf1f-510b-4a38-8f85-b59c36b2feb1-kube-api-access-sdbvk" (OuterVolumeSpecName: "kube-api-access-sdbvk") pod "be93bf1f-510b-4a38-8f85-b59c36b2feb1" (UID: "be93bf1f-510b-4a38-8f85-b59c36b2feb1"). InnerVolumeSpecName "kube-api-access-sdbvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.429235 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be93bf1f-510b-4a38-8f85-b59c36b2feb1" (UID: "be93bf1f-510b-4a38-8f85-b59c36b2feb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.452613 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-config-data" (OuterVolumeSpecName: "config-data") pod "be93bf1f-510b-4a38-8f85-b59c36b2feb1" (UID: "be93bf1f-510b-4a38-8f85-b59c36b2feb1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.487416 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.487452 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.487466 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdbvk\" (UniqueName: \"kubernetes.io/projected/be93bf1f-510b-4a38-8f85-b59c36b2feb1-kube-api-access-sdbvk\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.487477 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be93bf1f-510b-4a38-8f85-b59c36b2feb1-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.625238 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.689820 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-5bfp4"] Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.690050 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" podUID="a38e4faf-dc47-411c-94d0-7e143c2540d0" containerName="dnsmasq-dns" containerID="cri-o://2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07" gracePeriod=10 Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.769025 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2rlgj" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.895703 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsmk9\" (UniqueName: \"kubernetes.io/projected/090eca16-3536-4b84-85c8-e9a0d3a7deb6-kube-api-access-tsmk9\") pod \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\" (UID: \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\") " Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.896208 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090eca16-3536-4b84-85c8-e9a0d3a7deb6-combined-ca-bundle\") pod \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\" (UID: \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\") " Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.896730 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090eca16-3536-4b84-85c8-e9a0d3a7deb6-config-data\") pod \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\" (UID: \"090eca16-3536-4b84-85c8-e9a0d3a7deb6\") " Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.900516 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090eca16-3536-4b84-85c8-e9a0d3a7deb6-kube-api-access-tsmk9" (OuterVolumeSpecName: "kube-api-access-tsmk9") pod "090eca16-3536-4b84-85c8-e9a0d3a7deb6" (UID: "090eca16-3536-4b84-85c8-e9a0d3a7deb6"). InnerVolumeSpecName "kube-api-access-tsmk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.934968 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/090eca16-3536-4b84-85c8-e9a0d3a7deb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "090eca16-3536-4b84-85c8-e9a0d3a7deb6" (UID: "090eca16-3536-4b84-85c8-e9a0d3a7deb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.999713 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090eca16-3536-4b84-85c8-e9a0d3a7deb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:25 crc kubenswrapper[4779]: I1128 12:55:25.999763 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsmk9\" (UniqueName: \"kubernetes.io/projected/090eca16-3536-4b84-85c8-e9a0d3a7deb6-kube-api-access-tsmk9\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.035133 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/090eca16-3536-4b84-85c8-e9a0d3a7deb6-config-data" (OuterVolumeSpecName: "config-data") pod "090eca16-3536-4b84-85c8-e9a0d3a7deb6" (UID: "090eca16-3536-4b84-85c8-e9a0d3a7deb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.105325 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090eca16-3536-4b84-85c8-e9a0d3a7deb6-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.171541 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.308917 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-dns-swift-storage-0\") pod \"a38e4faf-dc47-411c-94d0-7e143c2540d0\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.308986 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-ovsdbserver-nb\") pod \"a38e4faf-dc47-411c-94d0-7e143c2540d0\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.309032 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-config\") pod \"a38e4faf-dc47-411c-94d0-7e143c2540d0\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.309570 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-ovsdbserver-sb\") pod \"a38e4faf-dc47-411c-94d0-7e143c2540d0\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.309663 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w45fp\" (UniqueName: \"kubernetes.io/projected/a38e4faf-dc47-411c-94d0-7e143c2540d0-kube-api-access-w45fp\") pod \"a38e4faf-dc47-411c-94d0-7e143c2540d0\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.309765 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-dns-svc\") pod \"a38e4faf-dc47-411c-94d0-7e143c2540d0\" (UID: \"a38e4faf-dc47-411c-94d0-7e143c2540d0\") " Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.317013 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a38e4faf-dc47-411c-94d0-7e143c2540d0-kube-api-access-w45fp" (OuterVolumeSpecName: "kube-api-access-w45fp") pod "a38e4faf-dc47-411c-94d0-7e143c2540d0" (UID: "a38e4faf-dc47-411c-94d0-7e143c2540d0"). InnerVolumeSpecName "kube-api-access-w45fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.356566 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a38e4faf-dc47-411c-94d0-7e143c2540d0" (UID: "a38e4faf-dc47-411c-94d0-7e143c2540d0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.390338 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-674bfd5544-x2xz6"] Nov 28 12:55:26 crc kubenswrapper[4779]: E1128 12:55:26.390836 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38e4faf-dc47-411c-94d0-7e143c2540d0" containerName="dnsmasq-dns" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.390852 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38e4faf-dc47-411c-94d0-7e143c2540d0" containerName="dnsmasq-dns" Nov 28 12:55:26 crc kubenswrapper[4779]: E1128 12:55:26.390876 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090eca16-3536-4b84-85c8-e9a0d3a7deb6" containerName="heat-db-sync" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.390885 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="090eca16-3536-4b84-85c8-e9a0d3a7deb6" containerName="heat-db-sync" Nov 28 12:55:26 crc kubenswrapper[4779]: E1128 12:55:26.390898 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38e4faf-dc47-411c-94d0-7e143c2540d0" containerName="init" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.390906 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38e4faf-dc47-411c-94d0-7e143c2540d0" containerName="init" Nov 28 12:55:26 crc kubenswrapper[4779]: E1128 12:55:26.390916 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be93bf1f-510b-4a38-8f85-b59c36b2feb1" containerName="placement-db-sync" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.390925 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="be93bf1f-510b-4a38-8f85-b59c36b2feb1" containerName="placement-db-sync" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.391188 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="be93bf1f-510b-4a38-8f85-b59c36b2feb1" containerName="placement-db-sync" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.391203 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="a38e4faf-dc47-411c-94d0-7e143c2540d0" containerName="dnsmasq-dns" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.391212 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="090eca16-3536-4b84-85c8-e9a0d3a7deb6" containerName="heat-db-sync" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.392340 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.395270 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.395416 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.395508 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hn9ct" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.396514 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.396680 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.401010 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a38e4faf-dc47-411c-94d0-7e143c2540d0" (UID: "a38e4faf-dc47-411c-94d0-7e143c2540d0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.404746 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-674bfd5544-x2xz6"] Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.408060 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a38e4faf-dc47-411c-94d0-7e143c2540d0" (UID: "a38e4faf-dc47-411c-94d0-7e143c2540d0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.414247 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a38e4faf-dc47-411c-94d0-7e143c2540d0" (UID: "a38e4faf-dc47-411c-94d0-7e143c2540d0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.428795 4779 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.428843 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.428855 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.428868 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w45fp\" (UniqueName: \"kubernetes.io/projected/a38e4faf-dc47-411c-94d0-7e143c2540d0-kube-api-access-w45fp\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.428883 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.446979 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ggv2n" event={"ID":"1f844f06-a227-4423-9d97-33f9c85c0df8","Type":"ContainerStarted","Data":"ba69b178e94de13e79a3c7e8d5ba4c90bbc358a0148d4b49f0ddce22f9797d61"} Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.447399 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-config" (OuterVolumeSpecName: "config") pod "a38e4faf-dc47-411c-94d0-7e143c2540d0" (UID: "a38e4faf-dc47-411c-94d0-7e143c2540d0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.456344 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1c512ed-6e02-45b5-a320-0a1b58b074ab","Type":"ContainerStarted","Data":"113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf"} Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.456595 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="ceilometer-central-agent" containerID="cri-o://a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19" gracePeriod=30 Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.456878 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.456926 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="proxy-httpd" containerID="cri-o://113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf" gracePeriod=30 Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.456970 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="sg-core" containerID="cri-o://b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f" gracePeriod=30 Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.457023 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="ceilometer-notification-agent" containerID="cri-o://d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce" gracePeriod=30 Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.485434 4779 generic.go:334] "Generic (PLEG): container finished" podID="a38e4faf-dc47-411c-94d0-7e143c2540d0" containerID="2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07" exitCode=0 Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.485712 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" event={"ID":"a38e4faf-dc47-411c-94d0-7e143c2540d0","Type":"ContainerDied","Data":"2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07"} Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.485824 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" event={"ID":"a38e4faf-dc47-411c-94d0-7e143c2540d0","Type":"ContainerDied","Data":"94dce1ee06460af8d07ee2a95925f556b7c203ef7a3a086b1b5a178b5da10c3e"} Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.486035 4779 scope.go:117] "RemoveContainer" containerID="2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.486325 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-5bfp4" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.510302 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2rlgj" event={"ID":"090eca16-3536-4b84-85c8-e9a0d3a7deb6","Type":"ContainerDied","Data":"ded2120ced19628bc0cd364b28c90e052b35e2ef293f36e77172891fb7a940fe"} Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.510367 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ded2120ced19628bc0cd364b28c90e052b35e2ef293f36e77172891fb7a940fe" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.510411 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-ggv2n" podStartSLOduration=3.12339093 podStartE2EDuration="53.510397175s" podCreationTimestamp="2025-11-28 12:54:33 +0000 UTC" firstStartedPulling="2025-11-28 12:54:35.32017261 +0000 UTC m=+1135.885847964" lastFinishedPulling="2025-11-28 12:55:25.707178855 +0000 UTC m=+1186.272854209" observedRunningTime="2025-11-28 12:55:26.474674733 +0000 UTC m=+1187.040350087" watchObservedRunningTime="2025-11-28 12:55:26.510397175 +0000 UTC m=+1187.076072529" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.510609 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2rlgj" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.511625 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.840634951 podStartE2EDuration="53.511619328s" podCreationTimestamp="2025-11-28 12:54:33 +0000 UTC" firstStartedPulling="2025-11-28 12:54:35.179626362 +0000 UTC m=+1135.745301716" lastFinishedPulling="2025-11-28 12:55:25.850610739 +0000 UTC m=+1186.416286093" observedRunningTime="2025-11-28 12:55:26.498598624 +0000 UTC m=+1187.064273978" watchObservedRunningTime="2025-11-28 12:55:26.511619328 +0000 UTC m=+1187.077294682" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.531340 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-internal-tls-certs\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.531407 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-config-data\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.531431 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-public-tls-certs\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.531458 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-scripts\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " 
pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.531477 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fszv\" (UniqueName: \"kubernetes.io/projected/60595e82-374d-4133-8a19-c240290be2da-kube-api-access-9fszv\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.531499 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60595e82-374d-4133-8a19-c240290be2da-logs\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.531563 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-combined-ca-bundle\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.531605 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a38e4faf-dc47-411c-94d0-7e143c2540d0-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.531732 4779 scope.go:117] "RemoveContainer" containerID="821e42c02099ce1aaf571a616fb0cee538a34cbf3677ff093436d4d87aa399a8" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.544019 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-5bfp4"] Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.554687 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-5bfp4"] Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.564226 4779 scope.go:117] "RemoveContainer" containerID="2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07" Nov 28 12:55:26 crc kubenswrapper[4779]: E1128 12:55:26.564776 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07\": container with ID starting with 2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07 not found: ID does not exist" containerID="2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.564820 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07"} err="failed to get container status \"2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07\": rpc error: code = NotFound desc = could not find container \"2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07\": container with ID starting with 2e0c27204fe3dd9b95ffe2fb40d5c34c90ad5a8ccc758ab0bd761dece8d38f07 not found: ID does not exist" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.564847 4779 scope.go:117] "RemoveContainer" containerID="821e42c02099ce1aaf571a616fb0cee538a34cbf3677ff093436d4d87aa399a8" Nov 28 12:55:26 crc kubenswrapper[4779]: E1128 12:55:26.565243 4779 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"821e42c02099ce1aaf571a616fb0cee538a34cbf3677ff093436d4d87aa399a8\": container with ID starting with 821e42c02099ce1aaf571a616fb0cee538a34cbf3677ff093436d4d87aa399a8 not found: ID does not exist" containerID="821e42c02099ce1aaf571a616fb0cee538a34cbf3677ff093436d4d87aa399a8" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.565364 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"821e42c02099ce1aaf571a616fb0cee538a34cbf3677ff093436d4d87aa399a8"} err="failed to get container status \"821e42c02099ce1aaf571a616fb0cee538a34cbf3677ff093436d4d87aa399a8\": rpc error: code = NotFound desc = could not find container \"821e42c02099ce1aaf571a616fb0cee538a34cbf3677ff093436d4d87aa399a8\": container with ID starting with 821e42c02099ce1aaf571a616fb0cee538a34cbf3677ff093436d4d87aa399a8 not found: ID does not exist" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.632643 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-config-data\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.632983 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-public-tls-certs\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.633011 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-scripts\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.633032 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fszv\" (UniqueName: \"kubernetes.io/projected/60595e82-374d-4133-8a19-c240290be2da-kube-api-access-9fszv\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.633054 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60595e82-374d-4133-8a19-c240290be2da-logs\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.633163 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-combined-ca-bundle\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.633226 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-internal-tls-certs\") pod \"placement-674bfd5544-x2xz6\" (UID: 
\"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.635546 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60595e82-374d-4133-8a19-c240290be2da-logs\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.636884 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-public-tls-certs\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.637600 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-scripts\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.637671 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-combined-ca-bundle\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.640240 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-internal-tls-certs\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.642651 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60595e82-374d-4133-8a19-c240290be2da-config-data\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.651667 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fszv\" (UniqueName: \"kubernetes.io/projected/60595e82-374d-4133-8a19-c240290be2da-kube-api-access-9fszv\") pod \"placement-674bfd5544-x2xz6\" (UID: \"60595e82-374d-4133-8a19-c240290be2da\") " pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:26 crc kubenswrapper[4779]: I1128 12:55:26.829820 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:27 crc kubenswrapper[4779]: W1128 12:55:27.433160 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60595e82_374d_4133_8a19_c240290be2da.slice/crio-af8d672fb00db428776c450f7ad98ed03857c50eb3a6e1d05ad82065bfd87187 WatchSource:0}: Error finding container af8d672fb00db428776c450f7ad98ed03857c50eb3a6e1d05ad82065bfd87187: Status 404 returned error can't find the container with id af8d672fb00db428776c450f7ad98ed03857c50eb3a6e1d05ad82065bfd87187 Nov 28 12:55:27 crc kubenswrapper[4779]: I1128 12:55:27.434899 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-674bfd5544-x2xz6"] Nov 28 12:55:27 crc kubenswrapper[4779]: I1128 12:55:27.530214 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sh4hl" event={"ID":"aaa51f35-9ab4-4629-ae5a-349484d0917d","Type":"ContainerStarted","Data":"dfbb86a0458722a007432ce973db0be61e1713c43e30999473692d6f1f0db0b5"} Nov 28 12:55:27 crc kubenswrapper[4779]: I1128 12:55:27.536401 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674bfd5544-x2xz6" event={"ID":"60595e82-374d-4133-8a19-c240290be2da","Type":"ContainerStarted","Data":"af8d672fb00db428776c450f7ad98ed03857c50eb3a6e1d05ad82065bfd87187"} Nov 28 12:55:27 crc kubenswrapper[4779]: I1128 12:55:27.547816 4779 generic.go:334] "Generic (PLEG): container finished" podID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerID="113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf" exitCode=0 Nov 28 12:55:27 crc kubenswrapper[4779]: I1128 12:55:27.547888 4779 generic.go:334] "Generic (PLEG): container finished" podID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerID="b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f" exitCode=2 Nov 28 12:55:27 crc kubenswrapper[4779]: I1128 12:55:27.547916 4779 generic.go:334] "Generic (PLEG): container finished" podID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerID="a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19" exitCode=0 Nov 28 12:55:27 crc kubenswrapper[4779]: I1128 12:55:27.548036 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1c512ed-6e02-45b5-a320-0a1b58b074ab","Type":"ContainerDied","Data":"113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf"} Nov 28 12:55:27 crc kubenswrapper[4779]: I1128 12:55:27.548086 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1c512ed-6e02-45b5-a320-0a1b58b074ab","Type":"ContainerDied","Data":"b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f"} Nov 28 12:55:27 crc kubenswrapper[4779]: I1128 12:55:27.548143 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1c512ed-6e02-45b5-a320-0a1b58b074ab","Type":"ContainerDied","Data":"a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19"} Nov 28 12:55:27 crc kubenswrapper[4779]: I1128 12:55:27.574762 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-sh4hl" podStartSLOduration=3.7453693489999997 podStartE2EDuration="54.574734904s" podCreationTimestamp="2025-11-28 12:54:33 +0000 UTC" firstStartedPulling="2025-11-28 12:54:35.010809208 +0000 UTC m=+1135.576484562" lastFinishedPulling="2025-11-28 12:55:25.840174763 +0000 UTC m=+1186.405850117" observedRunningTime="2025-11-28 
12:55:27.564651308 +0000 UTC m=+1188.130326712" watchObservedRunningTime="2025-11-28 12:55:27.574734904 +0000 UTC m=+1188.140410288" Nov 28 12:55:27 crc kubenswrapper[4779]: I1128 12:55:27.738617 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a38e4faf-dc47-411c-94d0-7e143c2540d0" path="/var/lib/kubelet/pods/a38e4faf-dc47-411c-94d0-7e143c2540d0/volumes" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.564367 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.575589 4779 generic.go:334] "Generic (PLEG): container finished" podID="1f844f06-a227-4423-9d97-33f9c85c0df8" containerID="ba69b178e94de13e79a3c7e8d5ba4c90bbc358a0148d4b49f0ddce22f9797d61" exitCode=0 Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.575659 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ggv2n" event={"ID":"1f844f06-a227-4423-9d97-33f9c85c0df8","Type":"ContainerDied","Data":"ba69b178e94de13e79a3c7e8d5ba4c90bbc358a0148d4b49f0ddce22f9797d61"} Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.577927 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674bfd5544-x2xz6" event={"ID":"60595e82-374d-4133-8a19-c240290be2da","Type":"ContainerStarted","Data":"2c0f233b408dd3128c2f11488720ac0f6e5432ae5615a1ef4db9fc25479fea56"} Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.578070 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.578141 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674bfd5544-x2xz6" event={"ID":"60595e82-374d-4133-8a19-c240290be2da","Type":"ContainerStarted","Data":"9ad3a0a3ce6cfa826bd81f83f8974e371122c6914528162913757f58619c86ed"} Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.578166 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.580898 4779 generic.go:334] "Generic (PLEG): container finished" podID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerID="d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce" exitCode=0 Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.580974 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1c512ed-6e02-45b5-a320-0a1b58b074ab","Type":"ContainerDied","Data":"d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce"} Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.580989 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.581054 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1c512ed-6e02-45b5-a320-0a1b58b074ab","Type":"ContainerDied","Data":"3a495fb8bb8a40f48ce1ef32bac92fefc808d97694f4c7a5a08476bf6e07f093"} Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.581136 4779 scope.go:117] "RemoveContainer" containerID="113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.607875 4779 scope.go:117] "RemoveContainer" containerID="b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.635197 4779 scope.go:117] "RemoveContainer" containerID="d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.646478 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-674bfd5544-x2xz6" podStartSLOduration=2.6464591090000003 podStartE2EDuration="2.646459109s" podCreationTimestamp="2025-11-28 12:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:28.640987444 +0000 UTC m=+1189.206662798" watchObservedRunningTime="2025-11-28 12:55:28.646459109 +0000 UTC m=+1189.212134453" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.664836 4779 scope.go:117] "RemoveContainer" containerID="a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.665835 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntq2b\" (UniqueName: \"kubernetes.io/projected/c1c512ed-6e02-45b5-a320-0a1b58b074ab-kube-api-access-ntq2b\") pod \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.666018 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-config-data\") pod \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.666085 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1c512ed-6e02-45b5-a320-0a1b58b074ab-log-httpd\") pod \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.666134 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-scripts\") pod \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.666200 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-combined-ca-bundle\") pod \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.666307 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/c1c512ed-6e02-45b5-a320-0a1b58b074ab-run-httpd\") pod \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.666341 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-sg-core-conf-yaml\") pod \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\" (UID: \"c1c512ed-6e02-45b5-a320-0a1b58b074ab\") " Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.666983 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c512ed-6e02-45b5-a320-0a1b58b074ab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c1c512ed-6e02-45b5-a320-0a1b58b074ab" (UID: "c1c512ed-6e02-45b5-a320-0a1b58b074ab"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.667435 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c512ed-6e02-45b5-a320-0a1b58b074ab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c1c512ed-6e02-45b5-a320-0a1b58b074ab" (UID: "c1c512ed-6e02-45b5-a320-0a1b58b074ab"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.700862 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c512ed-6e02-45b5-a320-0a1b58b074ab-kube-api-access-ntq2b" (OuterVolumeSpecName: "kube-api-access-ntq2b") pod "c1c512ed-6e02-45b5-a320-0a1b58b074ab" (UID: "c1c512ed-6e02-45b5-a320-0a1b58b074ab"). InnerVolumeSpecName "kube-api-access-ntq2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.703185 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-scripts" (OuterVolumeSpecName: "scripts") pod "c1c512ed-6e02-45b5-a320-0a1b58b074ab" (UID: "c1c512ed-6e02-45b5-a320-0a1b58b074ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.711566 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c1c512ed-6e02-45b5-a320-0a1b58b074ab" (UID: "c1c512ed-6e02-45b5-a320-0a1b58b074ab"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.754641 4779 scope.go:117] "RemoveContainer" containerID="113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf" Nov 28 12:55:28 crc kubenswrapper[4779]: E1128 12:55:28.755063 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf\": container with ID starting with 113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf not found: ID does not exist" containerID="113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.755124 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf"} err="failed to get container status \"113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf\": rpc error: code = NotFound desc = could not find container \"113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf\": container with ID starting with 113107799aedcc974bbde085134634cb2e32b2c9f48d94d643bef012eec7c8cf not found: ID does not exist" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.755155 4779 scope.go:117] "RemoveContainer" containerID="b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f" Nov 28 12:55:28 crc kubenswrapper[4779]: E1128 12:55:28.755668 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f\": container with ID starting with b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f not found: ID does not exist" containerID="b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.755699 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f"} err="failed to get container status \"b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f\": rpc error: code = NotFound desc = could not find container \"b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f\": container with ID starting with b581c781a23106c262b50f06de85addd1a186152f5ea5225c549e2fc981c1b4f not found: ID does not exist" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.755719 4779 scope.go:117] "RemoveContainer" containerID="d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce" Nov 28 12:55:28 crc kubenswrapper[4779]: E1128 12:55:28.756056 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce\": container with ID starting with d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce not found: ID does not exist" containerID="d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.756117 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce"} err="failed to get container status \"d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce\": rpc error: code = NotFound desc = could not 
find container \"d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce\": container with ID starting with d6f059866bd005fa18d51c3b07f06cfc7707528011b5c5beea2756640d4de1ce not found: ID does not exist" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.756148 4779 scope.go:117] "RemoveContainer" containerID="a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19" Nov 28 12:55:28 crc kubenswrapper[4779]: E1128 12:55:28.756462 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19\": container with ID starting with a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19 not found: ID does not exist" containerID="a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.756489 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19"} err="failed to get container status \"a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19\": rpc error: code = NotFound desc = could not find container \"a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19\": container with ID starting with a843fd57f43ab914ddbc61aac1c2dfb4f6a765d67e387788f015fb4ca36eac19 not found: ID does not exist" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.770580 4779 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1c512ed-6e02-45b5-a320-0a1b58b074ab-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.770614 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.770625 4779 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1c512ed-6e02-45b5-a320-0a1b58b074ab-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.770639 4779 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.770651 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntq2b\" (UniqueName: \"kubernetes.io/projected/c1c512ed-6e02-45b5-a320-0a1b58b074ab-kube-api-access-ntq2b\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.790201 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1c512ed-6e02-45b5-a320-0a1b58b074ab" (UID: "c1c512ed-6e02-45b5-a320-0a1b58b074ab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.796670 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-config-data" (OuterVolumeSpecName: "config-data") pod "c1c512ed-6e02-45b5-a320-0a1b58b074ab" (UID: "c1c512ed-6e02-45b5-a320-0a1b58b074ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.872126 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.872165 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c512ed-6e02-45b5-a320-0a1b58b074ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.925689 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.948255 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.978703 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:55:28 crc kubenswrapper[4779]: E1128 12:55:28.979138 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="ceilometer-central-agent" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.979159 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="ceilometer-central-agent" Nov 28 12:55:28 crc kubenswrapper[4779]: E1128 12:55:28.979181 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="proxy-httpd" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.979190 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="proxy-httpd" Nov 28 12:55:28 crc kubenswrapper[4779]: E1128 12:55:28.979201 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="ceilometer-notification-agent" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.979209 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="ceilometer-notification-agent" Nov 28 12:55:28 crc kubenswrapper[4779]: E1128 12:55:28.979232 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="sg-core" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.979241 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="sg-core" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.979459 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="ceilometer-notification-agent" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.979478 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="proxy-httpd" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.979499 4779 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="ceilometer-central-agent" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.979508 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" containerName="sg-core" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.981402 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.984582 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.985052 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 12:55:28 crc kubenswrapper[4779]: I1128 12:55:28.994163 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.076623 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.077220 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-scripts\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.077423 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8cp5\" (UniqueName: \"kubernetes.io/projected/bd014390-52f0-4d0c-944f-aed58d5b179f-kube-api-access-g8cp5\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.077619 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.077663 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-config-data\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.077823 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd014390-52f0-4d0c-944f-aed58d5b179f-log-httpd\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.078024 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd014390-52f0-4d0c-944f-aed58d5b179f-run-httpd\") pod \"ceilometer-0\" (UID: 
\"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.179968 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.180022 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-config-data\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.180086 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd014390-52f0-4d0c-944f-aed58d5b179f-log-httpd\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.180748 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd014390-52f0-4d0c-944f-aed58d5b179f-run-httpd\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.180774 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.180806 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-scripts\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.180850 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8cp5\" (UniqueName: \"kubernetes.io/projected/bd014390-52f0-4d0c-944f-aed58d5b179f-kube-api-access-g8cp5\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.181451 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd014390-52f0-4d0c-944f-aed58d5b179f-run-httpd\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.181576 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd014390-52f0-4d0c-944f-aed58d5b179f-log-httpd\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.186048 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " 
pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.186576 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-config-data\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.186902 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.187781 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-scripts\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.205297 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8cp5\" (UniqueName: \"kubernetes.io/projected/bd014390-52f0-4d0c-944f-aed58d5b179f-kube-api-access-g8cp5\") pod \"ceilometer-0\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.308132 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.741815 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1c512ed-6e02-45b5-a320-0a1b58b074ab" path="/var/lib/kubelet/pods/c1c512ed-6e02-45b5-a320-0a1b58b074ab/volumes" Nov 28 12:55:29 crc kubenswrapper[4779]: W1128 12:55:29.851210 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd014390_52f0_4d0c_944f_aed58d5b179f.slice/crio-16fdcc6b856e9fee8f87f8464616cb1621b38a9396acf5da34785964e5c7b868 WatchSource:0}: Error finding container 16fdcc6b856e9fee8f87f8464616cb1621b38a9396acf5da34785964e5c7b868: Status 404 returned error can't find the container with id 16fdcc6b856e9fee8f87f8464616cb1621b38a9396acf5da34785964e5c7b868 Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.853575 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.869527 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.994609 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f844f06-a227-4423-9d97-33f9c85c0df8-db-sync-config-data\") pod \"1f844f06-a227-4423-9d97-33f9c85c0df8\" (UID: \"1f844f06-a227-4423-9d97-33f9c85c0df8\") " Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.994708 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f844f06-a227-4423-9d97-33f9c85c0df8-combined-ca-bundle\") pod \"1f844f06-a227-4423-9d97-33f9c85c0df8\" (UID: \"1f844f06-a227-4423-9d97-33f9c85c0df8\") " Nov 28 12:55:29 crc kubenswrapper[4779]: I1128 12:55:29.994808 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlrz9\" (UniqueName: \"kubernetes.io/projected/1f844f06-a227-4423-9d97-33f9c85c0df8-kube-api-access-nlrz9\") pod \"1f844f06-a227-4423-9d97-33f9c85c0df8\" (UID: \"1f844f06-a227-4423-9d97-33f9c85c0df8\") " Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.001633 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f844f06-a227-4423-9d97-33f9c85c0df8-kube-api-access-nlrz9" (OuterVolumeSpecName: "kube-api-access-nlrz9") pod "1f844f06-a227-4423-9d97-33f9c85c0df8" (UID: "1f844f06-a227-4423-9d97-33f9c85c0df8"). InnerVolumeSpecName "kube-api-access-nlrz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.001722 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f844f06-a227-4423-9d97-33f9c85c0df8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1f844f06-a227-4423-9d97-33f9c85c0df8" (UID: "1f844f06-a227-4423-9d97-33f9c85c0df8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.027950 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f844f06-a227-4423-9d97-33f9c85c0df8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f844f06-a227-4423-9d97-33f9c85c0df8" (UID: "1f844f06-a227-4423-9d97-33f9c85c0df8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.096544 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f844f06-a227-4423-9d97-33f9c85c0df8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.096581 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlrz9\" (UniqueName: \"kubernetes.io/projected/1f844f06-a227-4423-9d97-33f9c85c0df8-kube-api-access-nlrz9\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.096595 4779 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f844f06-a227-4423-9d97-33f9c85c0df8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.609594 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd014390-52f0-4d0c-944f-aed58d5b179f","Type":"ContainerStarted","Data":"16fdcc6b856e9fee8f87f8464616cb1621b38a9396acf5da34785964e5c7b868"} Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.612799 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ggv2n" event={"ID":"1f844f06-a227-4423-9d97-33f9c85c0df8","Type":"ContainerDied","Data":"eec1fe99366f987ed7967e3b7d5fdb7b10bd97bc95b297d7de50d6360eb472f4"} Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.612985 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec1fe99366f987ed7967e3b7d5fdb7b10bd97bc95b297d7de50d6360eb472f4" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.612894 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ggv2n" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.877481 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-79c7d84d4c-82wcz"] Nov 28 12:55:30 crc kubenswrapper[4779]: E1128 12:55:30.878086 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f844f06-a227-4423-9d97-33f9c85c0df8" containerName="barbican-db-sync" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.878135 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f844f06-a227-4423-9d97-33f9c85c0df8" containerName="barbican-db-sync" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.878442 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f844f06-a227-4423-9d97-33f9c85c0df8" containerName="barbican-db-sync" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.880281 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.882404 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xmvlq" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.883893 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.883902 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.893823 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-79c7d84d4c-82wcz"] Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.954207 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7784844594-g7gws"] Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.955602 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.958981 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 28 12:55:30 crc kubenswrapper[4779]: I1128 12:55:30.968396 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7784844594-g7gws"] Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.035961 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/684d6129-3c1c-43df-b258-c32b447736d1-config-data\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.036056 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/684d6129-3c1c-43df-b258-c32b447736d1-config-data-custom\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.036080 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnn74\" (UniqueName: \"kubernetes.io/projected/684d6129-3c1c-43df-b258-c32b447736d1-kube-api-access-lnn74\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.036124 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684d6129-3c1c-43df-b258-c32b447736d1-combined-ca-bundle\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.036192 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/684d6129-3c1c-43df-b258-c32b447736d1-logs\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " 
pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.046424 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-z9d75"] Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.055917 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.082070 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-z9d75"] Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.139739 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-config-data\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.140108 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/684d6129-3c1c-43df-b258-c32b447736d1-config-data-custom\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.140132 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnn74\" (UniqueName: \"kubernetes.io/projected/684d6129-3c1c-43df-b258-c32b447736d1-kube-api-access-lnn74\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.140163 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684d6129-3c1c-43df-b258-c32b447736d1-combined-ca-bundle\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.140186 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-combined-ca-bundle\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.140217 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/684d6129-3c1c-43df-b258-c32b447736d1-logs\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.140238 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq2bw\" (UniqueName: \"kubernetes.io/projected/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-kube-api-access-tq2bw\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.140261 4779 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-logs\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.140309 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-config-data-custom\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.140331 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/684d6129-3c1c-43df-b258-c32b447736d1-config-data\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.147277 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684d6129-3c1c-43df-b258-c32b447736d1-combined-ca-bundle\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.155883 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/684d6129-3c1c-43df-b258-c32b447736d1-logs\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.157298 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/684d6129-3c1c-43df-b258-c32b447736d1-config-data-custom\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.157423 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/684d6129-3c1c-43df-b258-c32b447736d1-config-data\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.162901 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b89964d86-6t622"] Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.164485 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.174826 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.175189 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnn74\" (UniqueName: \"kubernetes.io/projected/684d6129-3c1c-43df-b258-c32b447736d1-kube-api-access-lnn74\") pod \"barbican-worker-79c7d84d4c-82wcz\" (UID: \"684d6129-3c1c-43df-b258-c32b447736d1\") " pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.175478 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b89964d86-6t622"] Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.198442 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-79c7d84d4c-82wcz" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242269 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-logs\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242326 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242533 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-combined-ca-bundle\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242574 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-config-data-custom\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242591 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecd6266-0ce0-435c-a8a3-b28526b74517-logs\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242626 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4jn5\" (UniqueName: \"kubernetes.io/projected/3ecd6266-0ce0-435c-a8a3-b28526b74517-kube-api-access-l4jn5\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242655 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data-custom\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242675 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242703 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-config-data\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242718 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242741 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242779 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc42s\" (UniqueName: \"kubernetes.io/projected/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-kube-api-access-nc42s\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242801 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242827 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-combined-ca-bundle\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242853 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq2bw\" (UniqueName: \"kubernetes.io/projected/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-kube-api-access-tq2bw\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: 
\"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.242872 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-config\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.243398 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-logs\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.246664 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-combined-ca-bundle\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.247528 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-config-data-custom\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.256195 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-config-data\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.265860 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq2bw\" (UniqueName: \"kubernetes.io/projected/6f944a10-9e80-47a5-8ad8-3b6edc0c3315-kube-api-access-tq2bw\") pod \"barbican-keystone-listener-7784844594-g7gws\" (UID: \"6f944a10-9e80-47a5-8ad8-3b6edc0c3315\") " pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.280256 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7784844594-g7gws" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.344090 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-config\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.344423 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.344455 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-combined-ca-bundle\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.344495 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecd6266-0ce0-435c-a8a3-b28526b74517-logs\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.344530 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4jn5\" (UniqueName: \"kubernetes.io/projected/3ecd6266-0ce0-435c-a8a3-b28526b74517-kube-api-access-l4jn5\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.344557 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data-custom\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.344578 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.344606 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.344630 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " 
pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.344665 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc42s\" (UniqueName: \"kubernetes.io/projected/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-kube-api-access-nc42s\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.344686 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.345486 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-config\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.345582 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.345887 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.346158 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.346165 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecd6266-0ce0-435c-a8a3-b28526b74517-logs\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.346659 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.354929 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data-custom\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.355267 4779 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.357661 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-combined-ca-bundle\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.363709 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc42s\" (UniqueName: \"kubernetes.io/projected/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-kube-api-access-nc42s\") pod \"dnsmasq-dns-75c8ddd69c-z9d75\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.364539 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4jn5\" (UniqueName: \"kubernetes.io/projected/3ecd6266-0ce0-435c-a8a3-b28526b74517-kube-api-access-l4jn5\") pod \"barbican-api-5b89964d86-6t622\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.377826 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.613967 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.637319 4779 generic.go:334] "Generic (PLEG): container finished" podID="aaa51f35-9ab4-4629-ae5a-349484d0917d" containerID="dfbb86a0458722a007432ce973db0be61e1713c43e30999473692d6f1f0db0b5" exitCode=0 Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.637389 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sh4hl" event={"ID":"aaa51f35-9ab4-4629-ae5a-349484d0917d","Type":"ContainerDied","Data":"dfbb86a0458722a007432ce973db0be61e1713c43e30999473692d6f1f0db0b5"} Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.641129 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd014390-52f0-4d0c-944f-aed58d5b179f","Type":"ContainerStarted","Data":"c07e218bebc32e46a5262a66789e3b2e6c91840016611a839e148ea842e4ab06"} Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.641163 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd014390-52f0-4d0c-944f-aed58d5b179f","Type":"ContainerStarted","Data":"1933d485b31e0a6d50e4c9567d1bda24d717564e5c93aa56578ef001ae4e7d68"} Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.713990 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-79c7d84d4c-82wcz"] Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.779231 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7784844594-g7gws"] Nov 28 12:55:31 crc kubenswrapper[4779]: I1128 12:55:31.900099 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-z9d75"] Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.090731 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b89964d86-6t622"] Nov 28 12:55:32 crc kubenswrapper[4779]: W1128 12:55:32.099069 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ecd6266_0ce0_435c_a8a3_b28526b74517.slice/crio-f3fa016ede6dc10977b3e30d8a9442b152a8306ebf04c69f2959a53d8f56d334 WatchSource:0}: Error finding container f3fa016ede6dc10977b3e30d8a9442b152a8306ebf04c69f2959a53d8f56d334: Status 404 returned error can't find the container with id f3fa016ede6dc10977b3e30d8a9442b152a8306ebf04c69f2959a53d8f56d334 Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.655387 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b89964d86-6t622" event={"ID":"3ecd6266-0ce0-435c-a8a3-b28526b74517","Type":"ContainerStarted","Data":"f8e7f527275d083087326188ea60ca0d052073288bd8344e2339b5baff8e9304"} Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.655686 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b89964d86-6t622" event={"ID":"3ecd6266-0ce0-435c-a8a3-b28526b74517","Type":"ContainerStarted","Data":"16658e704f521573e991d45766718e7581963b8d92fb62db6cbfd15b5996b761"} Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.655702 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.655712 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.655720 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-5b89964d86-6t622" event={"ID":"3ecd6266-0ce0-435c-a8a3-b28526b74517","Type":"ContainerStarted","Data":"f3fa016ede6dc10977b3e30d8a9442b152a8306ebf04c69f2959a53d8f56d334"} Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.666136 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7784844594-g7gws" event={"ID":"6f944a10-9e80-47a5-8ad8-3b6edc0c3315","Type":"ContainerStarted","Data":"3844645487d7b6b5b95496e57a308573cac63b1fb6e648b904e6b9b779d7de9a"} Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.671704 4779 generic.go:334] "Generic (PLEG): container finished" podID="a46c7de6-cbd5-4579-8fd8-0d7a6701e115" containerID="a20e80ad10bdce39f70eeac58ec8bee6b4e43bf7ca6c49d96265b7262794e83d" exitCode=0 Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.671743 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" event={"ID":"a46c7de6-cbd5-4579-8fd8-0d7a6701e115","Type":"ContainerDied","Data":"a20e80ad10bdce39f70eeac58ec8bee6b4e43bf7ca6c49d96265b7262794e83d"} Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.671780 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" event={"ID":"a46c7de6-cbd5-4579-8fd8-0d7a6701e115","Type":"ContainerStarted","Data":"9c20579cfb3cc848bfa5ce9de5731c74bd061f1902354763b0da86d55962eec5"} Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.676621 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79c7d84d4c-82wcz" event={"ID":"684d6129-3c1c-43df-b258-c32b447736d1","Type":"ContainerStarted","Data":"d6f660323cc86189c5ec2fa70bbc0436f19c8217c00307163e5e87c722626d5f"} Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.689243 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd014390-52f0-4d0c-944f-aed58d5b179f","Type":"ContainerStarted","Data":"0ae6312f8ee755e61ab4466220c49a10118979506e8a763f2c414ca5d589ed50"} Nov 28 12:55:32 crc kubenswrapper[4779]: I1128 12:55:32.717587 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b89964d86-6t622" podStartSLOduration=1.7175680930000001 podStartE2EDuration="1.717568093s" podCreationTimestamp="2025-11-28 12:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:32.686160975 +0000 UTC m=+1193.251836349" watchObservedRunningTime="2025-11-28 12:55:32.717568093 +0000 UTC m=+1193.283243447" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.303424 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.399421 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-scripts\") pod \"aaa51f35-9ab4-4629-ae5a-349484d0917d\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.399463 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-config-data\") pod \"aaa51f35-9ab4-4629-ae5a-349484d0917d\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.399496 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhrsh\" (UniqueName: \"kubernetes.io/projected/aaa51f35-9ab4-4629-ae5a-349484d0917d-kube-api-access-nhrsh\") pod \"aaa51f35-9ab4-4629-ae5a-349484d0917d\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.399589 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-combined-ca-bundle\") pod \"aaa51f35-9ab4-4629-ae5a-349484d0917d\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.399605 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaa51f35-9ab4-4629-ae5a-349484d0917d-etc-machine-id\") pod \"aaa51f35-9ab4-4629-ae5a-349484d0917d\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.399626 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-db-sync-config-data\") pod \"aaa51f35-9ab4-4629-ae5a-349484d0917d\" (UID: \"aaa51f35-9ab4-4629-ae5a-349484d0917d\") " Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.413522 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aaa51f35-9ab4-4629-ae5a-349484d0917d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aaa51f35-9ab4-4629-ae5a-349484d0917d" (UID: "aaa51f35-9ab4-4629-ae5a-349484d0917d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.457268 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-scripts" (OuterVolumeSpecName: "scripts") pod "aaa51f35-9ab4-4629-ae5a-349484d0917d" (UID: "aaa51f35-9ab4-4629-ae5a-349484d0917d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.457424 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "aaa51f35-9ab4-4629-ae5a-349484d0917d" (UID: "aaa51f35-9ab4-4629-ae5a-349484d0917d"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.457544 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa51f35-9ab4-4629-ae5a-349484d0917d-kube-api-access-nhrsh" (OuterVolumeSpecName: "kube-api-access-nhrsh") pod "aaa51f35-9ab4-4629-ae5a-349484d0917d" (UID: "aaa51f35-9ab4-4629-ae5a-349484d0917d"). InnerVolumeSpecName "kube-api-access-nhrsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.508469 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaa51f35-9ab4-4629-ae5a-349484d0917d" (UID: "aaa51f35-9ab4-4629-ae5a-349484d0917d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.510351 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.510891 4779 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaa51f35-9ab4-4629-ae5a-349484d0917d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.510913 4779 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.510921 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.510930 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhrsh\" (UniqueName: \"kubernetes.io/projected/aaa51f35-9ab4-4629-ae5a-349484d0917d-kube-api-access-nhrsh\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.560030 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-config-data" (OuterVolumeSpecName: "config-data") pod "aaa51f35-9ab4-4629-ae5a-349484d0917d" (UID: "aaa51f35-9ab4-4629-ae5a-349484d0917d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.613013 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa51f35-9ab4-4629-ae5a-349484d0917d-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.711741 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-sh4hl" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.714164 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sh4hl" event={"ID":"aaa51f35-9ab4-4629-ae5a-349484d0917d","Type":"ContainerDied","Data":"53d59a2dac967357d2cf1cf7d81ce49e92e6ada155f2f98656c7e16c73de1bf8"} Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.714211 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53d59a2dac967357d2cf1cf7d81ce49e92e6ada155f2f98656c7e16c73de1bf8" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.801408 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b764d4b5d-q6jq2"] Nov 28 12:55:33 crc kubenswrapper[4779]: E1128 12:55:33.801802 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa51f35-9ab4-4629-ae5a-349484d0917d" containerName="cinder-db-sync" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.801819 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa51f35-9ab4-4629-ae5a-349484d0917d" containerName="cinder-db-sync" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.801994 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa51f35-9ab4-4629-ae5a-349484d0917d" containerName="cinder-db-sync" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.802974 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.804863 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.804962 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.811268 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b764d4b5d-q6jq2"] Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.817850 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-public-tls-certs\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.817894 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-internal-tls-certs\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.817951 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gkxl\" (UniqueName: \"kubernetes.io/projected/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-kube-api-access-7gkxl\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.818017 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-logs\") pod 
\"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.818060 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-config-data\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.818129 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-combined-ca-bundle\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.818164 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-config-data-custom\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.921224 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-config-data-custom\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.921278 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-public-tls-certs\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.921310 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-internal-tls-certs\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.921351 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gkxl\" (UniqueName: \"kubernetes.io/projected/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-kube-api-access-7gkxl\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.921422 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-logs\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.921459 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-config-data\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.921480 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-combined-ca-bundle\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.923932 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-logs\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.930062 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-config-data-custom\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.930657 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-internal-tls-certs\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.931063 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-public-tls-certs\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.933810 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-config-data\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.935449 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-combined-ca-bundle\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.947253 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.952755 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.966160 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-k7xzm" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.966790 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.967034 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.967169 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.977067 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gkxl\" (UniqueName: \"kubernetes.io/projected/2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9-kube-api-access-7gkxl\") pod \"barbican-api-5b764d4b5d-q6jq2\" (UID: \"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9\") " pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:33 crc kubenswrapper[4779]: I1128 12:55:33.981257 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.010273 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-z9d75"] Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.027939 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.028007 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.028062 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.028105 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdt4r\" (UniqueName: \"kubernetes.io/projected/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-kube-api-access-kdt4r\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.028126 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.028166 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.029048 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-j87hp"] Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.042491 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.058749 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-j87hp"] Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.128555 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.130214 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.131995 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.132240 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-dns-svc\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.132286 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.132368 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.132446 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.132478 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.132517 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zg6v9\" (UniqueName: \"kubernetes.io/projected/126d0821-3736-467c-b13c-a5697d834177-kube-api-access-zg6v9\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.132593 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.132650 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.132706 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdt4r\" (UniqueName: \"kubernetes.io/projected/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-kube-api-access-kdt4r\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.132742 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.132774 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-config\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.132885 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.135954 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.139341 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.139438 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.139462 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " 
pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.149588 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.152217 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdt4r\" (UniqueName: \"kubernetes.io/projected/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-kube-api-access-kdt4r\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.152293 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.161757 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236329 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-dns-svc\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236377 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236430 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-scripts\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236456 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236473 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236495 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-config-data\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " 
pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236523 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-config-data-custom\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236548 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg6v9\" (UniqueName: \"kubernetes.io/projected/126d0821-3736-467c-b13c-a5697d834177-kube-api-access-zg6v9\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236565 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/795a9b0b-9199-4702-abff-fe32b9edd387-etc-machine-id\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236586 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/795a9b0b-9199-4702-abff-fe32b9edd387-logs\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236609 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236645 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h55m\" (UniqueName: \"kubernetes.io/projected/795a9b0b-9199-4702-abff-fe32b9edd387-kube-api-access-4h55m\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.236671 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-config\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.237599 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-config\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.237938 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.238023 4779 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.238189 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.238418 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-dns-svc\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.264791 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg6v9\" (UniqueName: \"kubernetes.io/projected/126d0821-3736-467c-b13c-a5697d834177-kube-api-access-zg6v9\") pod \"dnsmasq-dns-5784cf869f-j87hp\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.304448 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.338664 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h55m\" (UniqueName: \"kubernetes.io/projected/795a9b0b-9199-4702-abff-fe32b9edd387-kube-api-access-4h55m\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.339019 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-scripts\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.339049 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.339083 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-config-data\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.339142 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-config-data-custom\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.339163 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/795a9b0b-9199-4702-abff-fe32b9edd387-etc-machine-id\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.339196 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/795a9b0b-9199-4702-abff-fe32b9edd387-logs\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.339734 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/795a9b0b-9199-4702-abff-fe32b9edd387-logs\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.340577 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/795a9b0b-9199-4702-abff-fe32b9edd387-etc-machine-id\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.353239 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-scripts\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.355491 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-config-data-custom\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.358156 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.364132 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h55m\" (UniqueName: \"kubernetes.io/projected/795a9b0b-9199-4702-abff-fe32b9edd387-kube-api-access-4h55m\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.376006 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-config-data\") pod \"cinder-api-0\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.403929 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.417949 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b764d4b5d-q6jq2"] Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.466402 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.732901 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79c7d84d4c-82wcz" event={"ID":"684d6129-3c1c-43df-b258-c32b447736d1","Type":"ContainerStarted","Data":"99aca0e8a646fc953a9b2c66522b54c377d9bac788cf72356b28aea8d073a4d4"} Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.738420 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b764d4b5d-q6jq2" event={"ID":"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9","Type":"ContainerStarted","Data":"d8d46bbdb92da14b5fdd92862dabbc2d7952e9934b075751f6b6dbb534db3a80"} Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.741958 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7784844594-g7gws" event={"ID":"6f944a10-9e80-47a5-8ad8-3b6edc0c3315","Type":"ContainerStarted","Data":"2ddcfe96f120f28b71046710f2eaa501ec4d604907d4c7233820edce6d540b49"} Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.743200 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" event={"ID":"a46c7de6-cbd5-4579-8fd8-0d7a6701e115","Type":"ContainerStarted","Data":"93e6818563d77d4b3ff336d24bea38ade044d0d34c8ab78d143e8c0b69f19db4"} Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.743364 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" podUID="a46c7de6-cbd5-4579-8fd8-0d7a6701e115" containerName="dnsmasq-dns" containerID="cri-o://93e6818563d77d4b3ff336d24bea38ade044d0d34c8ab78d143e8c0b69f19db4" gracePeriod=10 Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.743440 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.787027 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" podStartSLOduration=3.787006649 podStartE2EDuration="3.787006649s" podCreationTimestamp="2025-11-28 12:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:34.761228489 +0000 UTC m=+1195.326903853" watchObservedRunningTime="2025-11-28 12:55:34.787006649 +0000 UTC m=+1195.352682003" Nov 28 12:55:34 crc kubenswrapper[4779]: I1128 12:55:34.849617 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.004050 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-j87hp"] Nov 28 12:55:35 crc kubenswrapper[4779]: W1128 12:55:35.011703 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod126d0821_3736_467c_b13c_a5697d834177.slice/crio-0ee61e4435bfb0dcbe039745081cf9993635659d01daad604272c247c8a9ebae WatchSource:0}: Error finding container 0ee61e4435bfb0dcbe039745081cf9993635659d01daad604272c247c8a9ebae: Status 404 returned error can't find the container with id 0ee61e4435bfb0dcbe039745081cf9993635659d01daad604272c247c8a9ebae Nov 28 12:55:35 crc kubenswrapper[4779]: W1128 12:55:35.086990 4779 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795a9b0b_9199_4702_abff_fe32b9edd387.slice/crio-d29a765d47407d28369e38cee25de9d50fd28ec8dd4baa2adde88456e30907a1 WatchSource:0}: Error finding container d29a765d47407d28369e38cee25de9d50fd28ec8dd4baa2adde88456e30907a1: Status 404 returned error can't find the container with id d29a765d47407d28369e38cee25de9d50fd28ec8dd4baa2adde88456e30907a1 Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.093848 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.785468 4779 generic.go:334] "Generic (PLEG): container finished" podID="126d0821-3736-467c-b13c-a5697d834177" containerID="b1db6becf352b0bcd49bc942c6033604bf3219c9ab02a31d98d98b3f8b7004e0" exitCode=0 Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.785958 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" event={"ID":"126d0821-3736-467c-b13c-a5697d834177","Type":"ContainerDied","Data":"b1db6becf352b0bcd49bc942c6033604bf3219c9ab02a31d98d98b3f8b7004e0"} Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.785997 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" event={"ID":"126d0821-3736-467c-b13c-a5697d834177","Type":"ContainerStarted","Data":"0ee61e4435bfb0dcbe039745081cf9993635659d01daad604272c247c8a9ebae"} Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.792963 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79c7d84d4c-82wcz" event={"ID":"684d6129-3c1c-43df-b258-c32b447736d1","Type":"ContainerStarted","Data":"22a73584aec612e4063f9ca38d9e9afb2a16df01b14d9fec0ba13b95a6793e7f"} Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.800970 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b764d4b5d-q6jq2" event={"ID":"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9","Type":"ContainerStarted","Data":"cfb090c8be6f9d6946e486fb5b2cee5f0da94eaadf5c2c63cd1635bfa572206d"} Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.836431 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd014390-52f0-4d0c-944f-aed58d5b179f","Type":"ContainerStarted","Data":"af291712d598b8d6fd36a612c4cdabc5152532d875d560ae2f111be115da9951"} Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.838343 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.841297 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7784844594-g7gws" event={"ID":"6f944a10-9e80-47a5-8ad8-3b6edc0c3315","Type":"ContainerStarted","Data":"555fc1544e5af0ff9b3e68c69530c05f1f76652405d5760fe1bbebbe51ce79c5"} Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.846978 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"795a9b0b-9199-4702-abff-fe32b9edd387","Type":"ContainerStarted","Data":"d29a765d47407d28369e38cee25de9d50fd28ec8dd4baa2adde88456e30907a1"} Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.855959 4779 generic.go:334] "Generic (PLEG): container finished" podID="a46c7de6-cbd5-4579-8fd8-0d7a6701e115" containerID="93e6818563d77d4b3ff336d24bea38ade044d0d34c8ab78d143e8c0b69f19db4" exitCode=0 Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.856074 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" event={"ID":"a46c7de6-cbd5-4579-8fd8-0d7a6701e115","Type":"ContainerDied","Data":"93e6818563d77d4b3ff336d24bea38ade044d0d34c8ab78d143e8c0b69f19db4"} Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.865926 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-79c7d84d4c-82wcz" podStartSLOduration=3.5525929229999997 podStartE2EDuration="5.865910523s" podCreationTimestamp="2025-11-28 12:55:30 +0000 UTC" firstStartedPulling="2025-11-28 12:55:31.727452192 +0000 UTC m=+1192.293127546" lastFinishedPulling="2025-11-28 12:55:34.040769792 +0000 UTC m=+1194.606445146" observedRunningTime="2025-11-28 12:55:35.850890947 +0000 UTC m=+1196.416566301" watchObservedRunningTime="2025-11-28 12:55:35.865910523 +0000 UTC m=+1196.431585877" Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.866169 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c74869d-4b0d-41e9-a613-7e0f9e67c77e","Type":"ContainerStarted","Data":"fa0b0b9e84346132797758c1591d9e08a2d98059983a862d2f5f47263183e04b"} Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.919170 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.7247758109999998 podStartE2EDuration="7.919150818s" podCreationTimestamp="2025-11-28 12:55:28 +0000 UTC" firstStartedPulling="2025-11-28 12:55:29.857019156 +0000 UTC m=+1190.422694530" lastFinishedPulling="2025-11-28 12:55:34.051394183 +0000 UTC m=+1194.617069537" observedRunningTime="2025-11-28 12:55:35.882369517 +0000 UTC m=+1196.448044881" watchObservedRunningTime="2025-11-28 12:55:35.919150818 +0000 UTC m=+1196.484826172" Nov 28 12:55:35 crc kubenswrapper[4779]: I1128 12:55:35.932352 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7784844594-g7gws" podStartSLOduration=3.687903312 podStartE2EDuration="5.932332135s" podCreationTimestamp="2025-11-28 12:55:30 +0000 UTC" firstStartedPulling="2025-11-28 12:55:31.79563658 +0000 UTC m=+1192.361311935" lastFinishedPulling="2025-11-28 12:55:34.040065404 +0000 UTC m=+1194.605740758" observedRunningTime="2025-11-28 12:55:35.910587362 +0000 UTC m=+1196.476262746" watchObservedRunningTime="2025-11-28 12:55:35.932332135 +0000 UTC m=+1196.498007489" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.334046 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.500694 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-ovsdbserver-nb\") pod \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.500736 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-dns-swift-storage-0\") pod \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.500763 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-dns-svc\") pod \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.500806 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc42s\" (UniqueName: \"kubernetes.io/projected/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-kube-api-access-nc42s\") pod \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.500865 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-ovsdbserver-sb\") pod \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.500900 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-config\") pod \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\" (UID: \"a46c7de6-cbd5-4579-8fd8-0d7a6701e115\") " Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.522835 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-kube-api-access-nc42s" (OuterVolumeSpecName: "kube-api-access-nc42s") pod "a46c7de6-cbd5-4579-8fd8-0d7a6701e115" (UID: "a46c7de6-cbd5-4579-8fd8-0d7a6701e115"). InnerVolumeSpecName "kube-api-access-nc42s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.560380 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a46c7de6-cbd5-4579-8fd8-0d7a6701e115" (UID: "a46c7de6-cbd5-4579-8fd8-0d7a6701e115"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.563420 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a46c7de6-cbd5-4579-8fd8-0d7a6701e115" (UID: "a46c7de6-cbd5-4579-8fd8-0d7a6701e115"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.579386 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a46c7de6-cbd5-4579-8fd8-0d7a6701e115" (UID: "a46c7de6-cbd5-4579-8fd8-0d7a6701e115"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.603459 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-config" (OuterVolumeSpecName: "config") pod "a46c7de6-cbd5-4579-8fd8-0d7a6701e115" (UID: "a46c7de6-cbd5-4579-8fd8-0d7a6701e115"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.603989 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.604016 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc42s\" (UniqueName: \"kubernetes.io/projected/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-kube-api-access-nc42s\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.604027 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.604035 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.604043 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.615641 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a46c7de6-cbd5-4579-8fd8-0d7a6701e115" (UID: "a46c7de6-cbd5-4579-8fd8-0d7a6701e115"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.706485 4779 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a46c7de6-cbd5-4579-8fd8-0d7a6701e115-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.878708 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"795a9b0b-9199-4702-abff-fe32b9edd387","Type":"ContainerStarted","Data":"a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647"} Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.881388 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.881377 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-z9d75" event={"ID":"a46c7de6-cbd5-4579-8fd8-0d7a6701e115","Type":"ContainerDied","Data":"9c20579cfb3cc848bfa5ce9de5731c74bd061f1902354763b0da86d55962eec5"} Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.881578 4779 scope.go:117] "RemoveContainer" containerID="93e6818563d77d4b3ff336d24bea38ade044d0d34c8ab78d143e8c0b69f19db4" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.886277 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" event={"ID":"126d0821-3736-467c-b13c-a5697d834177","Type":"ContainerStarted","Data":"25c80160d37c24d67d161ffa904d288e792389023e84e3a070b6a06b940f6168"} Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.887162 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.890068 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b764d4b5d-q6jq2" event={"ID":"2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9","Type":"ContainerStarted","Data":"ba626b8a525bd45ca7faea5ce55ee3b8f51b40cf86e6249774a3c7677fa56c45"} Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.890214 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.891304 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.931110 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" podStartSLOduration=3.931076025 podStartE2EDuration="3.931076025s" podCreationTimestamp="2025-11-28 12:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:36.919291194 +0000 UTC m=+1197.484966548" watchObservedRunningTime="2025-11-28 12:55:36.931076025 +0000 UTC m=+1197.496751379" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.939421 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b764d4b5d-q6jq2" podStartSLOduration=3.939406325 podStartE2EDuration="3.939406325s" podCreationTimestamp="2025-11-28 12:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:36.93770111 +0000 UTC m=+1197.503376484" watchObservedRunningTime="2025-11-28 12:55:36.939406325 +0000 UTC m=+1197.505081679" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.942033 4779 scope.go:117] "RemoveContainer" containerID="a20e80ad10bdce39f70eeac58ec8bee6b4e43bf7ca6c49d96265b7262794e83d" Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.968058 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-z9d75"] Nov 28 12:55:36 crc kubenswrapper[4779]: I1128 12:55:36.977947 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-z9d75"] Nov 28 12:55:37 crc kubenswrapper[4779]: I1128 12:55:37.409547 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 12:55:37 crc 
kubenswrapper[4779]: I1128 12:55:37.742218 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a46c7de6-cbd5-4579-8fd8-0d7a6701e115" path="/var/lib/kubelet/pods/a46c7de6-cbd5-4579-8fd8-0d7a6701e115/volumes" Nov 28 12:55:37 crc kubenswrapper[4779]: I1128 12:55:37.898157 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"795a9b0b-9199-4702-abff-fe32b9edd387","Type":"ContainerStarted","Data":"f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571"} Nov 28 12:55:37 crc kubenswrapper[4779]: I1128 12:55:37.900632 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 12:55:37 crc kubenswrapper[4779]: I1128 12:55:37.904821 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c74869d-4b0d-41e9-a613-7e0f9e67c77e","Type":"ContainerStarted","Data":"6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022"} Nov 28 12:55:37 crc kubenswrapper[4779]: I1128 12:55:37.905548 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c74869d-4b0d-41e9-a613-7e0f9e67c77e","Type":"ContainerStarted","Data":"add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331"} Nov 28 12:55:37 crc kubenswrapper[4779]: I1128 12:55:37.934914 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.934890397 podStartE2EDuration="3.934890397s" podCreationTimestamp="2025-11-28 12:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:37.925497559 +0000 UTC m=+1198.491172913" watchObservedRunningTime="2025-11-28 12:55:37.934890397 +0000 UTC m=+1198.500565751" Nov 28 12:55:37 crc kubenswrapper[4779]: I1128 12:55:37.968605 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.917871286 podStartE2EDuration="4.968590526s" podCreationTimestamp="2025-11-28 12:55:33 +0000 UTC" firstStartedPulling="2025-11-28 12:55:34.853182385 +0000 UTC m=+1195.418857739" lastFinishedPulling="2025-11-28 12:55:35.903901625 +0000 UTC m=+1196.469576979" observedRunningTime="2025-11-28 12:55:37.963606785 +0000 UTC m=+1198.529282139" watchObservedRunningTime="2025-11-28 12:55:37.968590526 +0000 UTC m=+1198.534265870" Nov 28 12:55:38 crc kubenswrapper[4779]: I1128 12:55:38.910462 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="795a9b0b-9199-4702-abff-fe32b9edd387" containerName="cinder-api-log" containerID="cri-o://a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647" gracePeriod=30 Nov 28 12:55:38 crc kubenswrapper[4779]: I1128 12:55:38.911813 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="795a9b0b-9199-4702-abff-fe32b9edd387" containerName="cinder-api" containerID="cri-o://f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571" gracePeriod=30 Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.305395 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.799194 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.928235 4779 generic.go:334] "Generic (PLEG): container finished" podID="795a9b0b-9199-4702-abff-fe32b9edd387" containerID="f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571" exitCode=0 Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.928261 4779 generic.go:334] "Generic (PLEG): container finished" podID="795a9b0b-9199-4702-abff-fe32b9edd387" containerID="a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647" exitCode=143 Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.928348 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.928338 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"795a9b0b-9199-4702-abff-fe32b9edd387","Type":"ContainerDied","Data":"f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571"} Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.928428 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"795a9b0b-9199-4702-abff-fe32b9edd387","Type":"ContainerDied","Data":"a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647"} Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.928457 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"795a9b0b-9199-4702-abff-fe32b9edd387","Type":"ContainerDied","Data":"d29a765d47407d28369e38cee25de9d50fd28ec8dd4baa2adde88456e30907a1"} Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.928488 4779 scope.go:117] "RemoveContainer" containerID="f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.959505 4779 scope.go:117] "RemoveContainer" containerID="a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.967581 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h55m\" (UniqueName: \"kubernetes.io/projected/795a9b0b-9199-4702-abff-fe32b9edd387-kube-api-access-4h55m\") pod \"795a9b0b-9199-4702-abff-fe32b9edd387\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.968196 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-scripts\") pod \"795a9b0b-9199-4702-abff-fe32b9edd387\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.968267 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/795a9b0b-9199-4702-abff-fe32b9edd387-logs\") pod \"795a9b0b-9199-4702-abff-fe32b9edd387\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.968289 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-combined-ca-bundle\") pod \"795a9b0b-9199-4702-abff-fe32b9edd387\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.968307 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-config-data\") pod \"795a9b0b-9199-4702-abff-fe32b9edd387\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.968335 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/795a9b0b-9199-4702-abff-fe32b9edd387-etc-machine-id\") pod \"795a9b0b-9199-4702-abff-fe32b9edd387\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.968352 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-config-data-custom\") pod \"795a9b0b-9199-4702-abff-fe32b9edd387\" (UID: \"795a9b0b-9199-4702-abff-fe32b9edd387\") " Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.976628 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/795a9b0b-9199-4702-abff-fe32b9edd387-logs" (OuterVolumeSpecName: "logs") pod "795a9b0b-9199-4702-abff-fe32b9edd387" (UID: "795a9b0b-9199-4702-abff-fe32b9edd387"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.976682 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/795a9b0b-9199-4702-abff-fe32b9edd387-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "795a9b0b-9199-4702-abff-fe32b9edd387" (UID: "795a9b0b-9199-4702-abff-fe32b9edd387"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.984005 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "795a9b0b-9199-4702-abff-fe32b9edd387" (UID: "795a9b0b-9199-4702-abff-fe32b9edd387"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.984865 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-scripts" (OuterVolumeSpecName: "scripts") pod "795a9b0b-9199-4702-abff-fe32b9edd387" (UID: "795a9b0b-9199-4702-abff-fe32b9edd387"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.991900 4779 scope.go:117] "RemoveContainer" containerID="f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571" Nov 28 12:55:39 crc kubenswrapper[4779]: E1128 12:55:39.992322 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571\": container with ID starting with f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571 not found: ID does not exist" containerID="f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.992352 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571"} err="failed to get container status \"f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571\": rpc error: code = NotFound desc = could not find container \"f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571\": container with ID starting with f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571 not found: ID does not exist" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.992374 4779 scope.go:117] "RemoveContainer" containerID="a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647" Nov 28 12:55:39 crc kubenswrapper[4779]: E1128 12:55:39.992690 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647\": container with ID starting with a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647 not found: ID does not exist" containerID="a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.992708 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647"} err="failed to get container status \"a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647\": rpc error: code = NotFound desc = could not find container \"a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647\": container with ID starting with a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647 not found: ID does not exist" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.992721 4779 scope.go:117] "RemoveContainer" containerID="f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.992915 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571"} err="failed to get container status \"f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571\": rpc error: code = NotFound desc = could not find container \"f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571\": container with ID starting with f924b116a3be44653b3a5a3bc6e8cd167dfc7373105d3309f38c4188e7633571 not found: ID does not exist" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.992928 4779 scope.go:117] "RemoveContainer" containerID="a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647" Nov 28 12:55:39 crc kubenswrapper[4779]: I1128 12:55:39.993149 4779 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647"} err="failed to get container status \"a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647\": rpc error: code = NotFound desc = could not find container \"a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647\": container with ID starting with a3bc951e937ec9038708fd6be9d19870703bc3abdb27a7c6c6a6d87957b23647 not found: ID does not exist" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:39.999508 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795a9b0b-9199-4702-abff-fe32b9edd387-kube-api-access-4h55m" (OuterVolumeSpecName: "kube-api-access-4h55m") pod "795a9b0b-9199-4702-abff-fe32b9edd387" (UID: "795a9b0b-9199-4702-abff-fe32b9edd387"). InnerVolumeSpecName "kube-api-access-4h55m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.020292 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "795a9b0b-9199-4702-abff-fe32b9edd387" (UID: "795a9b0b-9199-4702-abff-fe32b9edd387"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.031009 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-config-data" (OuterVolumeSpecName: "config-data") pod "795a9b0b-9199-4702-abff-fe32b9edd387" (UID: "795a9b0b-9199-4702-abff-fe32b9edd387"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.075969 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.076037 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/795a9b0b-9199-4702-abff-fe32b9edd387-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.076065 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.076122 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.076146 4779 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/795a9b0b-9199-4702-abff-fe32b9edd387-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.076169 4779 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/795a9b0b-9199-4702-abff-fe32b9edd387-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.076352 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h55m\" (UniqueName: \"kubernetes.io/projected/795a9b0b-9199-4702-abff-fe32b9edd387-kube-api-access-4h55m\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.265586 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.273857 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.334603 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 12:55:40 crc kubenswrapper[4779]: E1128 12:55:40.336685 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795a9b0b-9199-4702-abff-fe32b9edd387" containerName="cinder-api-log" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.336767 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="795a9b0b-9199-4702-abff-fe32b9edd387" containerName="cinder-api-log" Nov 28 12:55:40 crc kubenswrapper[4779]: E1128 12:55:40.336835 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46c7de6-cbd5-4579-8fd8-0d7a6701e115" containerName="dnsmasq-dns" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.336890 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46c7de6-cbd5-4579-8fd8-0d7a6701e115" containerName="dnsmasq-dns" Nov 28 12:55:40 crc kubenswrapper[4779]: E1128 12:55:40.336954 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795a9b0b-9199-4702-abff-fe32b9edd387" containerName="cinder-api" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.337004 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="795a9b0b-9199-4702-abff-fe32b9edd387" containerName="cinder-api" Nov 28 12:55:40 crc 
kubenswrapper[4779]: E1128 12:55:40.337073 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46c7de6-cbd5-4579-8fd8-0d7a6701e115" containerName="init" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.337142 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46c7de6-cbd5-4579-8fd8-0d7a6701e115" containerName="init" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.337744 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="795a9b0b-9199-4702-abff-fe32b9edd387" containerName="cinder-api-log" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.337819 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46c7de6-cbd5-4579-8fd8-0d7a6701e115" containerName="dnsmasq-dns" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.337838 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="795a9b0b-9199-4702-abff-fe32b9edd387" containerName="cinder-api" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.340031 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.346291 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.349740 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.350186 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.371201 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.381318 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-config-data\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.381364 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.381408 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-config-data-custom\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.381482 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-scripts\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.381519 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8605d0be-235c-4b63-8781-ea140c60e622-logs\") pod \"cinder-api-0\" (UID: 
\"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.381541 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.381598 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8605d0be-235c-4b63-8781-ea140c60e622-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.381628 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bknqx\" (UniqueName: \"kubernetes.io/projected/8605d0be-235c-4b63-8781-ea140c60e622-kube-api-access-bknqx\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.381653 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.483350 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8605d0be-235c-4b63-8781-ea140c60e622-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.483394 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bknqx\" (UniqueName: \"kubernetes.io/projected/8605d0be-235c-4b63-8781-ea140c60e622-kube-api-access-bknqx\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.483419 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.483452 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-config-data\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.483471 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.483508 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-config-data-custom\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.483626 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-scripts\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.483655 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8605d0be-235c-4b63-8781-ea140c60e622-logs\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.483673 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.484519 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8605d0be-235c-4b63-8781-ea140c60e622-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.484734 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8605d0be-235c-4b63-8781-ea140c60e622-logs\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.489992 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.495070 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.495467 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-scripts\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.498251 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-config-data\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.499607 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-config-data-custom\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.499722 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8605d0be-235c-4b63-8781-ea140c60e622-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.510213 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bknqx\" (UniqueName: \"kubernetes.io/projected/8605d0be-235c-4b63-8781-ea140c60e622-kube-api-access-bknqx\") pod \"cinder-api-0\" (UID: \"8605d0be-235c-4b63-8781-ea140c60e622\") " pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.676880 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 12:55:40 crc kubenswrapper[4779]: I1128 12:55:40.723646 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:41 crc kubenswrapper[4779]: I1128 12:55:41.221458 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 12:55:41 crc kubenswrapper[4779]: I1128 12:55:41.749510 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="795a9b0b-9199-4702-abff-fe32b9edd387" path="/var/lib/kubelet/pods/795a9b0b-9199-4702-abff-fe32b9edd387/volumes" Nov 28 12:55:41 crc kubenswrapper[4779]: I1128 12:55:41.985635 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8605d0be-235c-4b63-8781-ea140c60e622","Type":"ContainerStarted","Data":"3f5cb763d14aa9e8f2b82f091de7be67b6d7812c2143efa7c2fa8a03c803ea7d"} Nov 28 12:55:41 crc kubenswrapper[4779]: I1128 12:55:41.985701 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8605d0be-235c-4b63-8781-ea140c60e622","Type":"ContainerStarted","Data":"2cc42fcbfc5196ddfff6ad8813108add6ad809617d447909783fa26d79cb66eb"} Nov 28 12:55:42 crc kubenswrapper[4779]: I1128 12:55:42.565905 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7756d796d9-vcgbk" Nov 28 12:55:42 crc kubenswrapper[4779]: I1128 12:55:42.661404 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cf455dd68-ljtxn"] Nov 28 12:55:42 crc kubenswrapper[4779]: I1128 12:55:42.661653 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cf455dd68-ljtxn" podUID="4ebb4112-c634-428c-ae8a-55682be30c80" containerName="neutron-api" containerID="cri-o://aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28" gracePeriod=30 Nov 28 12:55:42 crc kubenswrapper[4779]: I1128 12:55:42.662118 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cf455dd68-ljtxn" podUID="4ebb4112-c634-428c-ae8a-55682be30c80" containerName="neutron-httpd" containerID="cri-o://4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6" gracePeriod=30 Nov 28 12:55:42 crc kubenswrapper[4779]: I1128 12:55:42.984882 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:42 crc kubenswrapper[4779]: I1128 12:55:42.995330 4779 
generic.go:334] "Generic (PLEG): container finished" podID="4ebb4112-c634-428c-ae8a-55682be30c80" containerID="4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6" exitCode=0 Nov 28 12:55:42 crc kubenswrapper[4779]: I1128 12:55:42.995405 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf455dd68-ljtxn" event={"ID":"4ebb4112-c634-428c-ae8a-55682be30c80","Type":"ContainerDied","Data":"4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6"} Nov 28 12:55:42 crc kubenswrapper[4779]: I1128 12:55:42.996977 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8605d0be-235c-4b63-8781-ea140c60e622","Type":"ContainerStarted","Data":"73410ea1f27507e0e80ab98ca23d42433dc9f3c5379e3261a5787c75e3fd73ac"} Nov 28 12:55:42 crc kubenswrapper[4779]: I1128 12:55:42.997181 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 12:55:43 crc kubenswrapper[4779]: I1128 12:55:43.030191 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.030172162 podStartE2EDuration="3.030172162s" podCreationTimestamp="2025-11-28 12:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:43.025144299 +0000 UTC m=+1203.590819653" watchObservedRunningTime="2025-11-28 12:55:43.030172162 +0000 UTC m=+1203.595847516" Nov 28 12:55:43 crc kubenswrapper[4779]: I1128 12:55:43.051400 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:44 crc kubenswrapper[4779]: I1128 12:55:44.405227 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:55:44 crc kubenswrapper[4779]: I1128 12:55:44.445696 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-wsv2w"] Nov 28 12:55:44 crc kubenswrapper[4779]: I1128 12:55:44.445928 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w" podUID="fe5791bd-f850-498b-8dfe-bef249904487" containerName="dnsmasq-dns" containerID="cri-o://23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2" gracePeriod=10 Nov 28 12:55:44 crc kubenswrapper[4779]: I1128 12:55:44.559518 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 12:55:44 crc kubenswrapper[4779]: I1128 12:55:44.604547 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 12:55:44 crc kubenswrapper[4779]: I1128 12:55:44.901935 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:44.999853 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-ovsdbserver-nb\") pod \"fe5791bd-f850-498b-8dfe-bef249904487\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.000280 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-config\") pod \"fe5791bd-f850-498b-8dfe-bef249904487\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.000801 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-dns-svc\") pod \"fe5791bd-f850-498b-8dfe-bef249904487\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.000863 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-dns-swift-storage-0\") pod \"fe5791bd-f850-498b-8dfe-bef249904487\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.000906 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-ovsdbserver-sb\") pod \"fe5791bd-f850-498b-8dfe-bef249904487\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.000935 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5blg\" (UniqueName: \"kubernetes.io/projected/fe5791bd-f850-498b-8dfe-bef249904487-kube-api-access-k5blg\") pod \"fe5791bd-f850-498b-8dfe-bef249904487\" (UID: \"fe5791bd-f850-498b-8dfe-bef249904487\") " Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.018298 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5791bd-f850-498b-8dfe-bef249904487-kube-api-access-k5blg" (OuterVolumeSpecName: "kube-api-access-k5blg") pod "fe5791bd-f850-498b-8dfe-bef249904487" (UID: "fe5791bd-f850-498b-8dfe-bef249904487"). InnerVolumeSpecName "kube-api-access-k5blg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.022998 4779 generic.go:334] "Generic (PLEG): container finished" podID="fe5791bd-f850-498b-8dfe-bef249904487" containerID="23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2" exitCode=0 Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.023277 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c74869d-4b0d-41e9-a613-7e0f9e67c77e" containerName="cinder-scheduler" containerID="cri-o://add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331" gracePeriod=30 Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.023435 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.023895 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w" event={"ID":"fe5791bd-f850-498b-8dfe-bef249904487","Type":"ContainerDied","Data":"23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2"} Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.023979 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-wsv2w" event={"ID":"fe5791bd-f850-498b-8dfe-bef249904487","Type":"ContainerDied","Data":"45fbd169684234876c32747ef29a25dedcd1ab5b739a01d4b50e96aa1f93646b"} Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.024049 4779 scope.go:117] "RemoveContainer" containerID="23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.024219 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c74869d-4b0d-41e9-a613-7e0f9e67c77e" containerName="probe" containerID="cri-o://6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022" gracePeriod=30 Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.046906 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-config" (OuterVolumeSpecName: "config") pod "fe5791bd-f850-498b-8dfe-bef249904487" (UID: "fe5791bd-f850-498b-8dfe-bef249904487"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.058808 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe5791bd-f850-498b-8dfe-bef249904487" (UID: "fe5791bd-f850-498b-8dfe-bef249904487"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.059748 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe5791bd-f850-498b-8dfe-bef249904487" (UID: "fe5791bd-f850-498b-8dfe-bef249904487"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.079394 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe5791bd-f850-498b-8dfe-bef249904487" (UID: "fe5791bd-f850-498b-8dfe-bef249904487"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.081465 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fe5791bd-f850-498b-8dfe-bef249904487" (UID: "fe5791bd-f850-498b-8dfe-bef249904487"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.100965 4779 scope.go:117] "RemoveContainer" containerID="2218a9e512e5465c40fe81fd4578a0080372adccbe8429b71c1afa3665c81d2f" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.102455 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.102475 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.102484 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.102494 4779 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.102503 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe5791bd-f850-498b-8dfe-bef249904487-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.102511 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5blg\" (UniqueName: \"kubernetes.io/projected/fe5791bd-f850-498b-8dfe-bef249904487-kube-api-access-k5blg\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.123174 4779 scope.go:117] "RemoveContainer" containerID="23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2" Nov 28 12:55:45 crc kubenswrapper[4779]: E1128 12:55:45.123991 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2\": container with ID starting with 23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2 not found: ID does not exist" containerID="23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.124021 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2"} err="failed to get container status \"23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2\": rpc error: code = NotFound desc = could not find container \"23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2\": container with ID starting with 23a734a37105ea28f53f8644364d0b778c486d689d4401cb7f984cc8910355c2 not found: ID does not exist" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.124040 4779 scope.go:117] "RemoveContainer" containerID="2218a9e512e5465c40fe81fd4578a0080372adccbe8429b71c1afa3665c81d2f" Nov 28 12:55:45 crc kubenswrapper[4779]: E1128 12:55:45.124437 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2218a9e512e5465c40fe81fd4578a0080372adccbe8429b71c1afa3665c81d2f\": container with ID starting with 
2218a9e512e5465c40fe81fd4578a0080372adccbe8429b71c1afa3665c81d2f not found: ID does not exist" containerID="2218a9e512e5465c40fe81fd4578a0080372adccbe8429b71c1afa3665c81d2f" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.124457 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2218a9e512e5465c40fe81fd4578a0080372adccbe8429b71c1afa3665c81d2f"} err="failed to get container status \"2218a9e512e5465c40fe81fd4578a0080372adccbe8429b71c1afa3665c81d2f\": rpc error: code = NotFound desc = could not find container \"2218a9e512e5465c40fe81fd4578a0080372adccbe8429b71c1afa3665c81d2f\": container with ID starting with 2218a9e512e5465c40fe81fd4578a0080372adccbe8429b71c1afa3665c81d2f not found: ID does not exist" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.350355 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-wsv2w"] Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.356441 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-wsv2w"] Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.614737 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.623596 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b764d4b5d-q6jq2" Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.737055 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b89964d86-6t622"] Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.739268 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b89964d86-6t622" podUID="3ecd6266-0ce0-435c-a8a3-b28526b74517" containerName="barbican-api-log" containerID="cri-o://16658e704f521573e991d45766718e7581963b8d92fb62db6cbfd15b5996b761" gracePeriod=30 Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.739384 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b89964d86-6t622" podUID="3ecd6266-0ce0-435c-a8a3-b28526b74517" containerName="barbican-api" containerID="cri-o://f8e7f527275d083087326188ea60ca0d052073288bd8344e2339b5baff8e9304" gracePeriod=30 Nov 28 12:55:45 crc kubenswrapper[4779]: I1128 12:55:45.759482 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe5791bd-f850-498b-8dfe-bef249904487" path="/var/lib/kubelet/pods/fe5791bd-f850-498b-8dfe-bef249904487/volumes" Nov 28 12:55:46 crc kubenswrapper[4779]: I1128 12:55:46.032821 4779 generic.go:334] "Generic (PLEG): container finished" podID="2c74869d-4b0d-41e9-a613-7e0f9e67c77e" containerID="6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022" exitCode=0 Nov 28 12:55:46 crc kubenswrapper[4779]: I1128 12:55:46.032879 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c74869d-4b0d-41e9-a613-7e0f9e67c77e","Type":"ContainerDied","Data":"6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022"} Nov 28 12:55:46 crc kubenswrapper[4779]: I1128 12:55:46.036472 4779 generic.go:334] "Generic (PLEG): container finished" podID="3ecd6266-0ce0-435c-a8a3-b28526b74517" containerID="16658e704f521573e991d45766718e7581963b8d92fb62db6cbfd15b5996b761" exitCode=143 Nov 28 12:55:46 crc kubenswrapper[4779]: I1128 12:55:46.036550 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-5b89964d86-6t622" event={"ID":"3ecd6266-0ce0-435c-a8a3-b28526b74517","Type":"ContainerDied","Data":"16658e704f521573e991d45766718e7581963b8d92fb62db6cbfd15b5996b761"} Nov 28 12:55:46 crc kubenswrapper[4779]: I1128 12:55:46.284650 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:55:46 crc kubenswrapper[4779]: I1128 12:55:46.284713 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:55:46 crc kubenswrapper[4779]: I1128 12:55:46.284771 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:55:46 crc kubenswrapper[4779]: I1128 12:55:46.285705 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19d1e85c2d2159fafc03753bd25b2d9cba3a3d26bcb40723109739bd64095a04"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:55:46 crc kubenswrapper[4779]: I1128 12:55:46.285794 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://19d1e85c2d2159fafc03753bd25b2d9cba3a3d26bcb40723109739bd64095a04" gracePeriod=600 Nov 28 12:55:47 crc kubenswrapper[4779]: I1128 12:55:47.049959 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="19d1e85c2d2159fafc03753bd25b2d9cba3a3d26bcb40723109739bd64095a04" exitCode=0 Nov 28 12:55:47 crc kubenswrapper[4779]: I1128 12:55:47.049986 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"19d1e85c2d2159fafc03753bd25b2d9cba3a3d26bcb40723109739bd64095a04"} Nov 28 12:55:47 crc kubenswrapper[4779]: I1128 12:55:47.050266 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"7c21214830b8f1e0b08f1ae5ac2fb71de0793255942c5f72a4ead485743abffa"} Nov 28 12:55:47 crc kubenswrapper[4779]: I1128 12:55:47.050302 4779 scope.go:117] "RemoveContainer" containerID="2ec718b6174bac7e525f03254a91895c0abe5e9151b5cc34c2ee2019a1b96a1c" Nov 28 12:55:47 crc kubenswrapper[4779]: I1128 12:55:47.075189 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-68b65c9788-nmrvn" Nov 28 12:55:47 crc kubenswrapper[4779]: I1128 12:55:47.922432 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.002973 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.058423 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-httpd-config\") pod \"4ebb4112-c634-428c-ae8a-55682be30c80\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.058494 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml677\" (UniqueName: \"kubernetes.io/projected/4ebb4112-c634-428c-ae8a-55682be30c80-kube-api-access-ml677\") pod \"4ebb4112-c634-428c-ae8a-55682be30c80\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.058743 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-combined-ca-bundle\") pod \"4ebb4112-c634-428c-ae8a-55682be30c80\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.058773 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-config\") pod \"4ebb4112-c634-428c-ae8a-55682be30c80\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.058798 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-scripts\") pod \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.058855 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-ovndb-tls-certs\") pod \"4ebb4112-c634-428c-ae8a-55682be30c80\" (UID: \"4ebb4112-c634-428c-ae8a-55682be30c80\") " Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.058941 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-config-data\") pod \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.059006 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-combined-ca-bundle\") pod \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.064745 4779 generic.go:334] "Generic (PLEG): container finished" podID="2c74869d-4b0d-41e9-a613-7e0f9e67c77e" containerID="add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331" exitCode=0 Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.064815 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2c74869d-4b0d-41e9-a613-7e0f9e67c77e","Type":"ContainerDied","Data":"add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331"} Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.064843 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c74869d-4b0d-41e9-a613-7e0f9e67c77e","Type":"ContainerDied","Data":"fa0b0b9e84346132797758c1591d9e08a2d98059983a862d2f5f47263183e04b"} Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.064860 4779 scope.go:117] "RemoveContainer" containerID="6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.064971 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.065389 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-scripts" (OuterVolumeSpecName: "scripts") pod "2c74869d-4b0d-41e9-a613-7e0f9e67c77e" (UID: "2c74869d-4b0d-41e9-a613-7e0f9e67c77e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.065405 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4ebb4112-c634-428c-ae8a-55682be30c80" (UID: "4ebb4112-c634-428c-ae8a-55682be30c80"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.072118 4779 generic.go:334] "Generic (PLEG): container finished" podID="4ebb4112-c634-428c-ae8a-55682be30c80" containerID="aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28" exitCode=0 Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.072211 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf455dd68-ljtxn" event={"ID":"4ebb4112-c634-428c-ae8a-55682be30c80","Type":"ContainerDied","Data":"aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28"} Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.072223 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cf455dd68-ljtxn" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.072238 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf455dd68-ljtxn" event={"ID":"4ebb4112-c634-428c-ae8a-55682be30c80","Type":"ContainerDied","Data":"c74e15d9a598364068a61051c24796a0778af418c366f7f222054b1138d51028"} Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.074850 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebb4112-c634-428c-ae8a-55682be30c80-kube-api-access-ml677" (OuterVolumeSpecName: "kube-api-access-ml677") pod "4ebb4112-c634-428c-ae8a-55682be30c80" (UID: "4ebb4112-c634-428c-ae8a-55682be30c80"). InnerVolumeSpecName "kube-api-access-ml677". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.090324 4779 scope.go:117] "RemoveContainer" containerID="add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.125502 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ebb4112-c634-428c-ae8a-55682be30c80" (UID: "4ebb4112-c634-428c-ae8a-55682be30c80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.128059 4779 scope.go:117] "RemoveContainer" containerID="6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022" Nov 28 12:55:48 crc kubenswrapper[4779]: E1128 12:55:48.129755 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022\": container with ID starting with 6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022 not found: ID does not exist" containerID="6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.129837 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022"} err="failed to get container status \"6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022\": rpc error: code = NotFound desc = could not find container \"6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022\": container with ID starting with 6f70ae9374f8b250a55d24e39f7298d7bb15bb3861c18daeaed8e723737e3022 not found: ID does not exist" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.129906 4779 scope.go:117] "RemoveContainer" containerID="add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331" Nov 28 12:55:48 crc kubenswrapper[4779]: E1128 12:55:48.130788 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331\": container with ID starting with add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331 not found: ID does not exist" containerID="add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.130838 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331"} err="failed to get container status \"add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331\": rpc error: code = NotFound desc = could not find container \"add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331\": container with ID starting with add2d1e3bd0b37c382e4a0af9de41a560c70091dfd61b5713e6b00e61c148331 not found: ID does not exist" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.130869 4779 scope.go:117] "RemoveContainer" containerID="4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.131400 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "2c74869d-4b0d-41e9-a613-7e0f9e67c77e" (UID: "2c74869d-4b0d-41e9-a613-7e0f9e67c77e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.132241 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-config" (OuterVolumeSpecName: "config") pod "4ebb4112-c634-428c-ae8a-55682be30c80" (UID: "4ebb4112-c634-428c-ae8a-55682be30c80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.150513 4779 scope.go:117] "RemoveContainer" containerID="aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.157738 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4ebb4112-c634-428c-ae8a-55682be30c80" (UID: "4ebb4112-c634-428c-ae8a-55682be30c80"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.160570 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-config-data-custom\") pod \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.160616 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-etc-machine-id\") pod \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.160636 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdt4r\" (UniqueName: \"kubernetes.io/projected/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-kube-api-access-kdt4r\") pod \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\" (UID: \"2c74869d-4b0d-41e9-a613-7e0f9e67c77e\") " Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.160783 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2c74869d-4b0d-41e9-a613-7e0f9e67c77e" (UID: "2c74869d-4b0d-41e9-a613-7e0f9e67c77e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.161302 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.161326 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.161335 4779 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.161344 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.161353 4779 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.161362 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml677\" (UniqueName: \"kubernetes.io/projected/4ebb4112-c634-428c-ae8a-55682be30c80-kube-api-access-ml677\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.161370 4779 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.161378 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ebb4112-c634-428c-ae8a-55682be30c80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.163937 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-kube-api-access-kdt4r" (OuterVolumeSpecName: "kube-api-access-kdt4r") pod "2c74869d-4b0d-41e9-a613-7e0f9e67c77e" (UID: "2c74869d-4b0d-41e9-a613-7e0f9e67c77e"). InnerVolumeSpecName "kube-api-access-kdt4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.164506 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c74869d-4b0d-41e9-a613-7e0f9e67c77e" (UID: "2c74869d-4b0d-41e9-a613-7e0f9e67c77e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.171461 4779 scope.go:117] "RemoveContainer" containerID="4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6" Nov 28 12:55:48 crc kubenswrapper[4779]: E1128 12:55:48.171927 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6\": container with ID starting with 4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6 not found: ID does not exist" containerID="4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.171970 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6"} err="failed to get container status \"4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6\": rpc error: code = NotFound desc = could not find container \"4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6\": container with ID starting with 4a2fae7138c4bc400e0467d94ddd4bcf9e27614c48087590c798fb1a636970b6 not found: ID does not exist" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.171994 4779 scope.go:117] "RemoveContainer" containerID="aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28" Nov 28 12:55:48 crc kubenswrapper[4779]: E1128 12:55:48.182302 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28\": container with ID starting with aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28 not found: ID does not exist" containerID="aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.182339 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28"} err="failed to get container status \"aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28\": rpc error: code = NotFound desc = could not find container \"aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28\": container with ID starting with aaa3d4e1d2a82bbd0df3b6c8e1205655c5e7d9ccfc290b0fffbaa30f796a2e28 not found: ID does not exist" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.186310 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-config-data" (OuterVolumeSpecName: "config-data") pod "2c74869d-4b0d-41e9-a613-7e0f9e67c77e" (UID: "2c74869d-4b0d-41e9-a613-7e0f9e67c77e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.262623 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdt4r\" (UniqueName: \"kubernetes.io/projected/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-kube-api-access-kdt4r\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.262855 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.262920 4779 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c74869d-4b0d-41e9-a613-7e0f9e67c77e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.398719 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.411478 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.427177 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 12:55:48 crc kubenswrapper[4779]: E1128 12:55:48.427774 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebb4112-c634-428c-ae8a-55682be30c80" containerName="neutron-api" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.427804 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebb4112-c634-428c-ae8a-55682be30c80" containerName="neutron-api" Nov 28 12:55:48 crc kubenswrapper[4779]: E1128 12:55:48.427832 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5791bd-f850-498b-8dfe-bef249904487" containerName="dnsmasq-dns" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.427844 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5791bd-f850-498b-8dfe-bef249904487" containerName="dnsmasq-dns" Nov 28 12:55:48 crc kubenswrapper[4779]: E1128 12:55:48.427890 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5791bd-f850-498b-8dfe-bef249904487" containerName="init" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.427903 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5791bd-f850-498b-8dfe-bef249904487" containerName="init" Nov 28 12:55:48 crc kubenswrapper[4779]: E1128 12:55:48.427936 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c74869d-4b0d-41e9-a613-7e0f9e67c77e" containerName="probe" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.427948 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c74869d-4b0d-41e9-a613-7e0f9e67c77e" containerName="probe" Nov 28 12:55:48 crc kubenswrapper[4779]: E1128 12:55:48.427972 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebb4112-c634-428c-ae8a-55682be30c80" containerName="neutron-httpd" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.427985 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebb4112-c634-428c-ae8a-55682be30c80" containerName="neutron-httpd" Nov 28 12:55:48 crc kubenswrapper[4779]: E1128 12:55:48.428006 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c74869d-4b0d-41e9-a613-7e0f9e67c77e" containerName="cinder-scheduler" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.428018 4779 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="2c74869d-4b0d-41e9-a613-7e0f9e67c77e" containerName="cinder-scheduler" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.428347 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5791bd-f850-498b-8dfe-bef249904487" containerName="dnsmasq-dns" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.428388 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c74869d-4b0d-41e9-a613-7e0f9e67c77e" containerName="cinder-scheduler" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.428410 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebb4112-c634-428c-ae8a-55682be30c80" containerName="neutron-httpd" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.428429 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c74869d-4b0d-41e9-a613-7e0f9e67c77e" containerName="probe" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.428459 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebb4112-c634-428c-ae8a-55682be30c80" containerName="neutron-api" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.429991 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.433538 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.439015 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cf455dd68-ljtxn"] Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.451526 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6cf455dd68-ljtxn"] Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.460202 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.572765 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvzz5\" (UniqueName: \"kubernetes.io/projected/b208660d-de0e-4218-a31b-66ce968db066-kube-api-access-mvzz5\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.572984 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b208660d-de0e-4218-a31b-66ce968db066-config-data\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.573181 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b208660d-de0e-4218-a31b-66ce968db066-scripts\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.573433 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b208660d-de0e-4218-a31b-66ce968db066-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.573487 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b208660d-de0e-4218-a31b-66ce968db066-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.573529 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b208660d-de0e-4218-a31b-66ce968db066-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.675987 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b208660d-de0e-4218-a31b-66ce968db066-scripts\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.676081 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b208660d-de0e-4218-a31b-66ce968db066-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.676120 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b208660d-de0e-4218-a31b-66ce968db066-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.676141 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b208660d-de0e-4218-a31b-66ce968db066-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.676170 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvzz5\" (UniqueName: \"kubernetes.io/projected/b208660d-de0e-4218-a31b-66ce968db066-kube-api-access-mvzz5\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.676205 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b208660d-de0e-4218-a31b-66ce968db066-config-data\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.680446 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b208660d-de0e-4218-a31b-66ce968db066-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.694161 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b208660d-de0e-4218-a31b-66ce968db066-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.694512 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b208660d-de0e-4218-a31b-66ce968db066-scripts\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.694874 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b208660d-de0e-4218-a31b-66ce968db066-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.697807 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b208660d-de0e-4218-a31b-66ce968db066-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.723627 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvzz5\" (UniqueName: \"kubernetes.io/projected/b208660d-de0e-4218-a31b-66ce968db066-kube-api-access-mvzz5\") pod \"cinder-scheduler-0\" (UID: \"b208660d-de0e-4218-a31b-66ce968db066\") " pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.754803 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.929159 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b89964d86-6t622" podUID="3ecd6266-0ce0-435c-a8a3-b28526b74517" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:46492->10.217.0.159:9311: read: connection reset by peer" Nov 28 12:55:48 crc kubenswrapper[4779]: I1128 12:55:48.929197 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b89964d86-6t622" podUID="3ecd6266-0ce0-435c-a8a3-b28526b74517" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:46508->10.217.0.159:9311: read: connection reset by peer" Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.092811 4779 generic.go:334] "Generic (PLEG): container finished" podID="3ecd6266-0ce0-435c-a8a3-b28526b74517" containerID="f8e7f527275d083087326188ea60ca0d052073288bd8344e2339b5baff8e9304" exitCode=0 Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.093060 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b89964d86-6t622" event={"ID":"3ecd6266-0ce0-435c-a8a3-b28526b74517","Type":"ContainerDied","Data":"f8e7f527275d083087326188ea60ca0d052073288bd8344e2339b5baff8e9304"} Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.219502 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:49 crc kubenswrapper[4779]: W1128 12:55:49.271497 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb208660d_de0e_4218_a31b_66ce968db066.slice/crio-b421ae3dc3e4ec71d06b35bfe539353866786bbeb7f08acb14c86721b26a1ca2 WatchSource:0}: Error finding container b421ae3dc3e4ec71d06b35bfe539353866786bbeb7f08acb14c86721b26a1ca2: Status 404 returned error can't find the container with id b421ae3dc3e4ec71d06b35bfe539353866786bbeb7f08acb14c86721b26a1ca2 Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.279676 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.390273 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data-custom\") pod \"3ecd6266-0ce0-435c-a8a3-b28526b74517\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.390396 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-combined-ca-bundle\") pod \"3ecd6266-0ce0-435c-a8a3-b28526b74517\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.390423 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data\") pod \"3ecd6266-0ce0-435c-a8a3-b28526b74517\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.390451 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4jn5\" (UniqueName: \"kubernetes.io/projected/3ecd6266-0ce0-435c-a8a3-b28526b74517-kube-api-access-l4jn5\") pod \"3ecd6266-0ce0-435c-a8a3-b28526b74517\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.390541 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecd6266-0ce0-435c-a8a3-b28526b74517-logs\") pod \"3ecd6266-0ce0-435c-a8a3-b28526b74517\" (UID: \"3ecd6266-0ce0-435c-a8a3-b28526b74517\") " Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.391436 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ecd6266-0ce0-435c-a8a3-b28526b74517-logs" (OuterVolumeSpecName: "logs") pod "3ecd6266-0ce0-435c-a8a3-b28526b74517" (UID: "3ecd6266-0ce0-435c-a8a3-b28526b74517"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.393694 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3ecd6266-0ce0-435c-a8a3-b28526b74517" (UID: "3ecd6266-0ce0-435c-a8a3-b28526b74517"). InnerVolumeSpecName "config-data-custom". 
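Every volume in these entries is identified by a UniqueName of the form <plugin>/<podUID>-<volumeName>, e.g. kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data. A hedged parser for that shape, leaning on pod UIDs being 36 characters; treat it as an illustration of the format, not kubelet's own helper:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitUniqueName breaks "kubernetes.io/secret/<podUID>-<name>" into its
    // plugin, pod UID, and volume-name parts.
    func splitUniqueName(u string) (plugin, podUID, volume string, err error) {
        parts := strings.SplitN(u, "/", 3)
        if len(parts) != 3 || len(parts[2]) < 38 {
            return "", "", "", fmt.Errorf("unexpected UniqueName %q", u)
        }
        plugin = parts[0] + "/" + parts[1]
        podUID, volume = parts[2][:36], parts[2][37:]
        return plugin, podUID, volume, nil
    }

    func main() {
        fmt.Println(splitUniqueName("kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data"))
    }
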
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.394203 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecd6266-0ce0-435c-a8a3-b28526b74517-kube-api-access-l4jn5" (OuterVolumeSpecName: "kube-api-access-l4jn5") pod "3ecd6266-0ce0-435c-a8a3-b28526b74517" (UID: "3ecd6266-0ce0-435c-a8a3-b28526b74517"). InnerVolumeSpecName "kube-api-access-l4jn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.416427 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ecd6266-0ce0-435c-a8a3-b28526b74517" (UID: "3ecd6266-0ce0-435c-a8a3-b28526b74517"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.464930 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data" (OuterVolumeSpecName: "config-data") pod "3ecd6266-0ce0-435c-a8a3-b28526b74517" (UID: "3ecd6266-0ce0-435c-a8a3-b28526b74517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.491971 4779 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.491994 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.492002 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecd6266-0ce0-435c-a8a3-b28526b74517-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.492011 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4jn5\" (UniqueName: \"kubernetes.io/projected/3ecd6266-0ce0-435c-a8a3-b28526b74517-kube-api-access-l4jn5\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.492021 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecd6266-0ce0-435c-a8a3-b28526b74517-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.739955 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c74869d-4b0d-41e9-a613-7e0f9e67c77e" path="/var/lib/kubelet/pods/2c74869d-4b0d-41e9-a613-7e0f9e67c77e/volumes" Nov 28 12:55:49 crc kubenswrapper[4779]: I1128 12:55:49.741544 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ebb4112-c634-428c-ae8a-55682be30c80" path="/var/lib/kubelet/pods/4ebb4112-c634-428c-ae8a-55682be30c80/volumes" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.104434 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b89964d86-6t622" event={"ID":"3ecd6266-0ce0-435c-a8a3-b28526b74517","Type":"ContainerDied","Data":"f3fa016ede6dc10977b3e30d8a9442b152a8306ebf04c69f2959a53d8f56d334"} Nov 28 12:55:50 
crc kubenswrapper[4779]: I1128 12:55:50.105526 4779 scope.go:117] "RemoveContainer" containerID="f8e7f527275d083087326188ea60ca0d052073288bd8344e2339b5baff8e9304" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.105624 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b89964d86-6t622" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.110829 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b208660d-de0e-4218-a31b-66ce968db066","Type":"ContainerStarted","Data":"9a173e48b0da3b467b4b038bceae1dd26dbcb4cea1c5762fc19cff17b0d2f31a"} Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.110850 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b208660d-de0e-4218-a31b-66ce968db066","Type":"ContainerStarted","Data":"b421ae3dc3e4ec71d06b35bfe539353866786bbeb7f08acb14c86721b26a1ca2"} Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.126924 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b89964d86-6t622"] Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.134435 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5b89964d86-6t622"] Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.155328 4779 scope.go:117] "RemoveContainer" containerID="16658e704f521573e991d45766718e7581963b8d92fb62db6cbfd15b5996b761" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.465256 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-55ff4b54d5-48p68"] Nov 28 12:55:50 crc kubenswrapper[4779]: E1128 12:55:50.465644 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecd6266-0ce0-435c-a8a3-b28526b74517" containerName="barbican-api" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.465655 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecd6266-0ce0-435c-a8a3-b28526b74517" containerName="barbican-api" Nov 28 12:55:50 crc kubenswrapper[4779]: E1128 12:55:50.465668 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecd6266-0ce0-435c-a8a3-b28526b74517" containerName="barbican-api-log" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.465674 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecd6266-0ce0-435c-a8a3-b28526b74517" containerName="barbican-api-log" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.465837 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecd6266-0ce0-435c-a8a3-b28526b74517" containerName="barbican-api-log" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.465854 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecd6266-0ce0-435c-a8a3-b28526b74517" containerName="barbican-api" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.466438 4779 util.go:30] "No sandbox for pod can be found. 
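When a replacement pod is ADDed, the cpu and memory managers first evict bookkeeping left over from containers of deleted pods, which is what the RemoveStaleState / "Deleted CPUSet assignment" lines record. A sketch of that reaping pattern, assuming a simple podUID/container keyed map standing in for kubelet's state_mem:

    package main

    import "fmt"

    type ck struct{ podUID, container string }

    // removeStaleState drops per-container resource assignments whose pod is
    // no longer active, mirroring the cpu_manager/memory_manager entries above.
    func removeStaleState(assignments map[ck]string, active map[string]bool) {
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%s containerName=%s\n",
                    k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        a := map[ck]string{{"3ecd6266-0ce0-435c-a8a3-b28526b74517", "barbican-api"}: "cpuset:0-3"}
        removeStaleState(a, map[string]bool{})
    }
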
Need to start a new one" pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.472824 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-prts7" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.473016 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.473162 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.487787 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-55ff4b54d5-48p68"] Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.526284 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-pcgm5"] Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.527605 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.556416 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-pcgm5"] Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.578758 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-864dfdcc4d-7wcth"] Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.579819 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.584576 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.592726 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6b7bf76b6-kbx6h"] Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.596526 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.613441 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.616134 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-864dfdcc4d-7wcth"] Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.617236 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27nm7\" (UniqueName: \"kubernetes.io/projected/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-kube-api-access-27nm7\") pod \"heat-engine-55ff4b54d5-48p68\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.617276 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-combined-ca-bundle\") pod \"heat-engine-55ff4b54d5-48p68\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.617366 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-config-data\") pod \"heat-engine-55ff4b54d5-48p68\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.617410 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-config-data-custom\") pod \"heat-engine-55ff4b54d5-48p68\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.659185 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b7bf76b6-kbx6h"] Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.718946 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.718995 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719017 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-config-data-custom\") pod \"heat-cfnapi-6b7bf76b6-kbx6h\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719042 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-combined-ca-bundle\") pod \"heat-cfnapi-6b7bf76b6-kbx6h\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719060 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719084 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719114 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-config\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719142 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-config-data-custom\") pod \"heat-api-864dfdcc4d-7wcth\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719164 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-config-data\") pod \"heat-engine-55ff4b54d5-48p68\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719203 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv8qm\" (UniqueName: \"kubernetes.io/projected/1604b21d-7590-4a34-9cf2-e3d03f2db385-kube-api-access-pv8qm\") pod \"heat-cfnapi-6b7bf76b6-kbx6h\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719219 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-config-data\") pod \"heat-cfnapi-6b7bf76b6-kbx6h\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719241 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-config-data-custom\") pod \"heat-engine-55ff4b54d5-48p68\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 
12:55:50.719268 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27nm7\" (UniqueName: \"kubernetes.io/projected/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-kube-api-access-27nm7\") pod \"heat-engine-55ff4b54d5-48p68\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719295 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-combined-ca-bundle\") pod \"heat-engine-55ff4b54d5-48p68\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719842 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8wxr\" (UniqueName: \"kubernetes.io/projected/240f13c7-a251-4e85-b0c2-fafc0c03d52c-kube-api-access-r8wxr\") pod \"heat-api-864dfdcc4d-7wcth\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719900 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-config-data\") pod \"heat-api-864dfdcc4d-7wcth\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.719917 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-combined-ca-bundle\") pod \"heat-api-864dfdcc4d-7wcth\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.720173 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q92gr\" (UniqueName: \"kubernetes.io/projected/67f75fbe-5004-449a-bd16-51659985e95e-kube-api-access-q92gr\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.727322 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-combined-ca-bundle\") pod \"heat-engine-55ff4b54d5-48p68\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.734064 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-config-data\") pod \"heat-engine-55ff4b54d5-48p68\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.743779 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27nm7\" (UniqueName: \"kubernetes.io/projected/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-kube-api-access-27nm7\") pod \"heat-engine-55ff4b54d5-48p68\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 
crc kubenswrapper[4779]: I1128 12:55:50.749806 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-config-data-custom\") pod \"heat-engine-55ff4b54d5-48p68\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.799569 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821314 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821352 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821370 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-config-data-custom\") pod \"heat-cfnapi-6b7bf76b6-kbx6h\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821391 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-combined-ca-bundle\") pod \"heat-cfnapi-6b7bf76b6-kbx6h\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821413 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821440 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821457 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-config\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821481 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-config-data-custom\") pod \"heat-api-864dfdcc4d-7wcth\" (UID: 
\"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821527 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv8qm\" (UniqueName: \"kubernetes.io/projected/1604b21d-7590-4a34-9cf2-e3d03f2db385-kube-api-access-pv8qm\") pod \"heat-cfnapi-6b7bf76b6-kbx6h\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821543 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-config-data\") pod \"heat-cfnapi-6b7bf76b6-kbx6h\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821638 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8wxr\" (UniqueName: \"kubernetes.io/projected/240f13c7-a251-4e85-b0c2-fafc0c03d52c-kube-api-access-r8wxr\") pod \"heat-api-864dfdcc4d-7wcth\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821660 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-config-data\") pod \"heat-api-864dfdcc4d-7wcth\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821674 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-combined-ca-bundle\") pod \"heat-api-864dfdcc4d-7wcth\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.821716 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q92gr\" (UniqueName: \"kubernetes.io/projected/67f75fbe-5004-449a-bd16-51659985e95e-kube-api-access-q92gr\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.822690 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.822853 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-dns-swift-storage-0\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.823236 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-dns-svc\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " 
pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.837190 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.837221 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-config\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.839049 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-combined-ca-bundle\") pod \"heat-cfnapi-6b7bf76b6-kbx6h\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.839685 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-config-data-custom\") pod \"heat-cfnapi-6b7bf76b6-kbx6h\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.840896 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-config-data-custom\") pod \"heat-api-864dfdcc4d-7wcth\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.846588 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-combined-ca-bundle\") pod \"heat-api-864dfdcc4d-7wcth\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.850773 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-config-data\") pod \"heat-cfnapi-6b7bf76b6-kbx6h\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.855675 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q92gr\" (UniqueName: \"kubernetes.io/projected/67f75fbe-5004-449a-bd16-51659985e95e-kube-api-access-q92gr\") pod \"dnsmasq-dns-f6bc4c6c9-pcgm5\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") " pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.858768 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8wxr\" (UniqueName: \"kubernetes.io/projected/240f13c7-a251-4e85-b0c2-fafc0c03d52c-kube-api-access-r8wxr\") pod \"heat-api-864dfdcc4d-7wcth\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.864420 4779 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv8qm\" (UniqueName: \"kubernetes.io/projected/1604b21d-7590-4a34-9cf2-e3d03f2db385-kube-api-access-pv8qm\") pod \"heat-cfnapi-6b7bf76b6-kbx6h\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.878812 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.880300 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-config-data\") pod \"heat-api-864dfdcc4d-7wcth\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.914482 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:50 crc kubenswrapper[4779]: I1128 12:55:50.939543 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.150448 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b208660d-de0e-4218-a31b-66ce968db066","Type":"ContainerStarted","Data":"f90ab31a5fd12e248941efe959a1c14bb20792773513c1c81c012cdde5eb78b7"} Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.164051 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.169753 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.171673 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.171877 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9mzhh" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.176017 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.186043 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.195863 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.195845569 podStartE2EDuration="3.195845569s" podCreationTimestamp="2025-11-28 12:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:51.16668667 +0000 UTC m=+1211.732362034" watchObservedRunningTime="2025-11-28 12:55:51.195845569 +0000 UTC m=+1211.761520923" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.230950 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36c7807-80b6-4159-99be-b2b6ea1fe291-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.231042 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xsn2\" (UniqueName: \"kubernetes.io/projected/d36c7807-80b6-4159-99be-b2b6ea1fe291-kube-api-access-8xsn2\") pod \"openstackclient\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.231122 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d36c7807-80b6-4159-99be-b2b6ea1fe291-openstack-config\") pod \"openstackclient\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.231147 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d36c7807-80b6-4159-99be-b2b6ea1fe291-openstack-config-secret\") pod \"openstackclient\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.332573 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36c7807-80b6-4159-99be-b2b6ea1fe291-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.332677 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xsn2\" (UniqueName: \"kubernetes.io/projected/d36c7807-80b6-4159-99be-b2b6ea1fe291-kube-api-access-8xsn2\") pod \"openstackclient\" (UID: 
\"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.332730 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d36c7807-80b6-4159-99be-b2b6ea1fe291-openstack-config\") pod \"openstackclient\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.332750 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d36c7807-80b6-4159-99be-b2b6ea1fe291-openstack-config-secret\") pod \"openstackclient\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.341749 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d36c7807-80b6-4159-99be-b2b6ea1fe291-openstack-config\") pod \"openstackclient\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.344607 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36c7807-80b6-4159-99be-b2b6ea1fe291-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.348810 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d36c7807-80b6-4159-99be-b2b6ea1fe291-openstack-config-secret\") pod \"openstackclient\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.358375 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xsn2\" (UniqueName: \"kubernetes.io/projected/d36c7807-80b6-4159-99be-b2b6ea1fe291-kube-api-access-8xsn2\") pod \"openstackclient\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.368183 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.372053 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.382263 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.396112 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-55ff4b54d5-48p68"] Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.406277 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-pcgm5"] Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.450142 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.451302 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.457470 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.493230 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-864dfdcc4d-7wcth"] Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.561003 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a8f5701-7e1d-414b-aa88-4af10f82a58e-openstack-config\") pod \"openstackclient\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.561052 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8f5701-7e1d-414b-aa88-4af10f82a58e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.561090 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a8f5701-7e1d-414b-aa88-4af10f82a58e-openstack-config-secret\") pod \"openstackclient\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.561144 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d624j\" (UniqueName: \"kubernetes.io/projected/4a8f5701-7e1d-414b-aa88-4af10f82a58e-kube-api-access-d624j\") pod \"openstackclient\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: E1128 12:55:51.617302 4779 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 28 12:55:51 crc kubenswrapper[4779]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_d36c7807-80b6-4159-99be-b2b6ea1fe291_0(f0377bf77ca13c4fdd1a8da1ffb87478108cc835b1e6f3fcbf21bcfcc51a6250): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f0377bf77ca13c4fdd1a8da1ffb87478108cc835b1e6f3fcbf21bcfcc51a6250" Netns:"/var/run/netns/8c1b85d4-f3fc-47ec-aa6b-dc33d636b82c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=f0377bf77ca13c4fdd1a8da1ffb87478108cc835b1e6f3fcbf21bcfcc51a6250;K8S_POD_UID=d36c7807-80b6-4159-99be-b2b6ea1fe291" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/d36c7807-80b6-4159-99be-b2b6ea1fe291]: expected pod UID "d36c7807-80b6-4159-99be-b2b6ea1fe291" but got "4a8f5701-7e1d-414b-aa88-4af10f82a58e" from Kube API Nov 28 12:55:51 crc kubenswrapper[4779]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 28 12:55:51 crc kubenswrapper[4779]: > Nov 28 12:55:51 crc kubenswrapper[4779]: E1128 12:55:51.617645 4779 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 28 12:55:51 crc kubenswrapper[4779]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_d36c7807-80b6-4159-99be-b2b6ea1fe291_0(f0377bf77ca13c4fdd1a8da1ffb87478108cc835b1e6f3fcbf21bcfcc51a6250): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f0377bf77ca13c4fdd1a8da1ffb87478108cc835b1e6f3fcbf21bcfcc51a6250" Netns:"/var/run/netns/8c1b85d4-f3fc-47ec-aa6b-dc33d636b82c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=f0377bf77ca13c4fdd1a8da1ffb87478108cc835b1e6f3fcbf21bcfcc51a6250;K8S_POD_UID=d36c7807-80b6-4159-99be-b2b6ea1fe291" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/d36c7807-80b6-4159-99be-b2b6ea1fe291]: expected pod UID "d36c7807-80b6-4159-99be-b2b6ea1fe291" but got "4a8f5701-7e1d-414b-aa88-4af10f82a58e" from Kube API Nov 28 12:55:51 crc kubenswrapper[4779]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 28 12:55:51 crc kubenswrapper[4779]: > pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.632273 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b7bf76b6-kbx6h"] Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.663417 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a8f5701-7e1d-414b-aa88-4af10f82a58e-openstack-config\") pod \"openstackclient\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.663470 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8f5701-7e1d-414b-aa88-4af10f82a58e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.663500 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a8f5701-7e1d-414b-aa88-4af10f82a58e-openstack-config-secret\") pod \"openstackclient\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.663536 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d624j\" (UniqueName: \"kubernetes.io/projected/4a8f5701-7e1d-414b-aa88-4af10f82a58e-kube-api-access-d624j\") pod \"openstackclient\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.664587 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a8f5701-7e1d-414b-aa88-4af10f82a58e-openstack-config\") pod \"openstackclient\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.673644 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8f5701-7e1d-414b-aa88-4af10f82a58e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.673942 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a8f5701-7e1d-414b-aa88-4af10f82a58e-openstack-config-secret\") pod \"openstackclient\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.686170 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d624j\" (UniqueName: \"kubernetes.io/projected/4a8f5701-7e1d-414b-aa88-4af10f82a58e-kube-api-access-d624j\") pod \"openstackclient\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") " pod="openstack/openstackclient" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.745920 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ecd6266-0ce0-435c-a8a3-b28526b74517" path="/var/lib/kubelet/pods/3ecd6266-0ce0-435c-a8a3-b28526b74517/volumes" Nov 28 12:55:51 crc kubenswrapper[4779]: I1128 12:55:51.818879 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.158629 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" event={"ID":"1604b21d-7590-4a34-9cf2-e3d03f2db385","Type":"ContainerStarted","Data":"86e410d0d5a6272d7b21dd2630d535a862050629b6d2737319434f006762f2ef"} Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.159553 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-864dfdcc4d-7wcth" event={"ID":"240f13c7-a251-4e85-b0c2-fafc0c03d52c","Type":"ContainerStarted","Data":"b686a61f2029a4e7e319807e7c55f1a6645ff5e29eb5d9b00b0c5acfc9085d67"} Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.160829 4779 generic.go:334] "Generic (PLEG): container finished" podID="67f75fbe-5004-449a-bd16-51659985e95e" containerID="dac14d327ca31947e0c5b44051e25278635e0a3811f5c4bb270eb3324ab0a45a" exitCode=0 Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.161130 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" event={"ID":"67f75fbe-5004-449a-bd16-51659985e95e","Type":"ContainerDied","Data":"dac14d327ca31947e0c5b44051e25278635e0a3811f5c4bb270eb3324ab0a45a"} Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.161152 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" event={"ID":"67f75fbe-5004-449a-bd16-51659985e95e","Type":"ContainerStarted","Data":"e326ad7149fa46402d1a16f8eb7dbdca6e68e92808c924974a95f2e913850e91"} Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.162689 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-55ff4b54d5-48p68" event={"ID":"ed331d56-a8fd-4fd5-8d71-45f853a35fa8","Type":"ContainerStarted","Data":"a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e"} Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.162733 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-55ff4b54d5-48p68" event={"ID":"ed331d56-a8fd-4fd5-8d71-45f853a35fa8","Type":"ContainerStarted","Data":"45e855e310cc8afb9e5b6376890bb26e81baa516668025688fdd8eff7a208c1f"} Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.162711 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.222617 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-55ff4b54d5-48p68" podStartSLOduration=2.222600657 podStartE2EDuration="2.222600657s" podCreationTimestamp="2025-11-28 12:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:52.215412427 +0000 UTC m=+1212.781087781" watchObservedRunningTime="2025-11-28 12:55:52.222600657 +0000 UTC m=+1212.788276011" Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.262393 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.271387 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="d36c7807-80b6-4159-99be-b2b6ea1fe291" podUID="4a8f5701-7e1d-414b-aa88-4af10f82a58e" Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.278131 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.379221 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36c7807-80b6-4159-99be-b2b6ea1fe291-combined-ca-bundle\") pod \"d36c7807-80b6-4159-99be-b2b6ea1fe291\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.379462 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d36c7807-80b6-4159-99be-b2b6ea1fe291-openstack-config-secret\") pod \"d36c7807-80b6-4159-99be-b2b6ea1fe291\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.379492 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d36c7807-80b6-4159-99be-b2b6ea1fe291-openstack-config\") pod \"d36c7807-80b6-4159-99be-b2b6ea1fe291\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.379539 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xsn2\" (UniqueName: \"kubernetes.io/projected/d36c7807-80b6-4159-99be-b2b6ea1fe291-kube-api-access-8xsn2\") pod \"d36c7807-80b6-4159-99be-b2b6ea1fe291\" (UID: \"d36c7807-80b6-4159-99be-b2b6ea1fe291\") " Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.380348 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36c7807-80b6-4159-99be-b2b6ea1fe291-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d36c7807-80b6-4159-99be-b2b6ea1fe291" (UID: "d36c7807-80b6-4159-99be-b2b6ea1fe291"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.383617 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36c7807-80b6-4159-99be-b2b6ea1fe291-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d36c7807-80b6-4159-99be-b2b6ea1fe291" (UID: "d36c7807-80b6-4159-99be-b2b6ea1fe291"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.385147 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36c7807-80b6-4159-99be-b2b6ea1fe291-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d36c7807-80b6-4159-99be-b2b6ea1fe291" (UID: "d36c7807-80b6-4159-99be-b2b6ea1fe291"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.385317 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36c7807-80b6-4159-99be-b2b6ea1fe291-kube-api-access-8xsn2" (OuterVolumeSpecName: "kube-api-access-8xsn2") pod "d36c7807-80b6-4159-99be-b2b6ea1fe291" (UID: "d36c7807-80b6-4159-99be-b2b6ea1fe291"). InnerVolumeSpecName "kube-api-access-8xsn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.481000 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36c7807-80b6-4159-99be-b2b6ea1fe291-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.481031 4779 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d36c7807-80b6-4159-99be-b2b6ea1fe291-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.481040 4779 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d36c7807-80b6-4159-99be-b2b6ea1fe291-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:52 crc kubenswrapper[4779]: I1128 12:55:52.481047 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xsn2\" (UniqueName: \"kubernetes.io/projected/d36c7807-80b6-4159-99be-b2b6ea1fe291-kube-api-access-8xsn2\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.023884 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.178288 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" event={"ID":"67f75fbe-5004-449a-bd16-51659985e95e","Type":"ContainerStarted","Data":"cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8"} Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.179386 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.182260 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4a8f5701-7e1d-414b-aa88-4af10f82a58e","Type":"ContainerStarted","Data":"0cfc841e5b2598183b7191de6bb6c075b746346172c8d5dc1a4d993e1da02022"} Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.182340 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.182506 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.221480 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" podStartSLOduration=3.221464129 podStartE2EDuration="3.221464129s" podCreationTimestamp="2025-11-28 12:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:53.21997117 +0000 UTC m=+1213.785646524" watchObservedRunningTime="2025-11-28 12:55:53.221464129 +0000 UTC m=+1213.787139483" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.225847 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="d36c7807-80b6-4159-99be-b2b6ea1fe291" podUID="4a8f5701-7e1d-414b-aa88-4af10f82a58e" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.747219 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36c7807-80b6-4159-99be-b2b6ea1fe291" path="/var/lib/kubelet/pods/d36c7807-80b6-4159-99be-b2b6ea1fe291/volumes" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.755506 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.842579 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-9c6b99df5-82cnl"] Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.847037 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.849561 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.849791 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.849893 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.867932 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-9c6b99df5-82cnl"] Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.922995 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d5987a-c7cb-400e-8efb-7375385f0e20-log-httpd\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.923036 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d5987a-c7cb-400e-8efb-7375385f0e20-combined-ca-bundle\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.924077 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75d5987a-c7cb-400e-8efb-7375385f0e20-public-tls-certs\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.924168 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcmhn\" (UniqueName: \"kubernetes.io/projected/75d5987a-c7cb-400e-8efb-7375385f0e20-kube-api-access-pcmhn\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.924204 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d5987a-c7cb-400e-8efb-7375385f0e20-config-data\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.924220 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d5987a-c7cb-400e-8efb-7375385f0e20-run-httpd\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.924238 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75d5987a-c7cb-400e-8efb-7375385f0e20-etc-swift\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " 
pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:53 crc kubenswrapper[4779]: I1128 12:55:53.924281 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75d5987a-c7cb-400e-8efb-7375385f0e20-internal-tls-certs\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.026121 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d5987a-c7cb-400e-8efb-7375385f0e20-log-httpd\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.027103 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d5987a-c7cb-400e-8efb-7375385f0e20-combined-ca-bundle\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.027035 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d5987a-c7cb-400e-8efb-7375385f0e20-log-httpd\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.027233 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75d5987a-c7cb-400e-8efb-7375385f0e20-public-tls-certs\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.027286 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcmhn\" (UniqueName: \"kubernetes.io/projected/75d5987a-c7cb-400e-8efb-7375385f0e20-kube-api-access-pcmhn\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.027877 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d5987a-c7cb-400e-8efb-7375385f0e20-config-data\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.027900 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d5987a-c7cb-400e-8efb-7375385f0e20-run-httpd\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.027938 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75d5987a-c7cb-400e-8efb-7375385f0e20-etc-swift\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: 
I1128 12:55:54.027986 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75d5987a-c7cb-400e-8efb-7375385f0e20-internal-tls-certs\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.028488 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75d5987a-c7cb-400e-8efb-7375385f0e20-run-httpd\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.031307 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75d5987a-c7cb-400e-8efb-7375385f0e20-combined-ca-bundle\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.031846 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75d5987a-c7cb-400e-8efb-7375385f0e20-public-tls-certs\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.032171 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75d5987a-c7cb-400e-8efb-7375385f0e20-config-data\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.033076 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75d5987a-c7cb-400e-8efb-7375385f0e20-internal-tls-certs\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.040068 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75d5987a-c7cb-400e-8efb-7375385f0e20-etc-swift\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.042904 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcmhn\" (UniqueName: \"kubernetes.io/projected/75d5987a-c7cb-400e-8efb-7375385f0e20-kube-api-access-pcmhn\") pod \"swift-proxy-9c6b99df5-82cnl\" (UID: \"75d5987a-c7cb-400e-8efb-7375385f0e20\") " pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.178258 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.190841 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" event={"ID":"1604b21d-7590-4a34-9cf2-e3d03f2db385","Type":"ContainerStarted","Data":"bfb97f0358d284d94f52421e9edc4d3f896df828fcc1e9f49163b623682331b6"} Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.190970 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.193136 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-864dfdcc4d-7wcth" event={"ID":"240f13c7-a251-4e85-b0c2-fafc0c03d52c","Type":"ContainerStarted","Data":"c264d2dafa86be1299c2b98d8efd0e997671227c6702da51282cecb82e2fa9f9"} Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.221750 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" podStartSLOduration=2.220588545 podStartE2EDuration="4.221726628s" podCreationTimestamp="2025-11-28 12:55:50 +0000 UTC" firstStartedPulling="2025-11-28 12:55:51.662269055 +0000 UTC m=+1212.227944409" lastFinishedPulling="2025-11-28 12:55:53.663407138 +0000 UTC m=+1214.229082492" observedRunningTime="2025-11-28 12:55:54.207875483 +0000 UTC m=+1214.773550837" watchObservedRunningTime="2025-11-28 12:55:54.221726628 +0000 UTC m=+1214.787401982" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.239572 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-864dfdcc4d-7wcth" podStartSLOduration=2.079936884 podStartE2EDuration="4.239557389s" podCreationTimestamp="2025-11-28 12:55:50 +0000 UTC" firstStartedPulling="2025-11-28 12:55:51.499559192 +0000 UTC m=+1212.065234546" lastFinishedPulling="2025-11-28 12:55:53.659179697 +0000 UTC m=+1214.224855051" observedRunningTime="2025-11-28 12:55:54.232224155 +0000 UTC m=+1214.797899499" watchObservedRunningTime="2025-11-28 12:55:54.239557389 +0000 UTC m=+1214.805232743" Nov 28 12:55:54 crc kubenswrapper[4779]: I1128 12:55:54.827462 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-9c6b99df5-82cnl"] Nov 28 12:55:54 crc kubenswrapper[4779]: W1128 12:55:54.831569 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75d5987a_c7cb_400e_8efb_7375385f0e20.slice/crio-8e571415d65a442556189ceed57cd3317146a9a08dc83015ac842d96e2d0e279 WatchSource:0}: Error finding container 8e571415d65a442556189ceed57cd3317146a9a08dc83015ac842d96e2d0e279: Status 404 returned error can't find the container with id 8e571415d65a442556189ceed57cd3317146a9a08dc83015ac842d96e2d0e279 Nov 28 12:55:55 crc kubenswrapper[4779]: I1128 12:55:55.209737 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-9c6b99df5-82cnl" event={"ID":"75d5987a-c7cb-400e-8efb-7375385f0e20","Type":"ContainerStarted","Data":"d90b98aa94feabf469c3aa9c377beefb36efa4cd4e2820fa3c171a2d33aa0500"} Nov 28 12:55:55 crc kubenswrapper[4779]: I1128 12:55:55.210028 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:55:55 crc kubenswrapper[4779]: I1128 12:55:55.210044 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-9c6b99df5-82cnl" 
event={"ID":"75d5987a-c7cb-400e-8efb-7375385f0e20","Type":"ContainerStarted","Data":"8e571415d65a442556189ceed57cd3317146a9a08dc83015ac842d96e2d0e279"} Nov 28 12:55:56 crc kubenswrapper[4779]: I1128 12:55:56.245420 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-9c6b99df5-82cnl" event={"ID":"75d5987a-c7cb-400e-8efb-7375385f0e20","Type":"ContainerStarted","Data":"17500b307b70bbf63f7e181d5b25cc1ba91bd0bbd80edddc1e172dfde1e372a1"} Nov 28 12:55:56 crc kubenswrapper[4779]: I1128 12:55:56.245932 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:56 crc kubenswrapper[4779]: I1128 12:55:56.245947 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:55:56 crc kubenswrapper[4779]: I1128 12:55:56.289533 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-9c6b99df5-82cnl" podStartSLOduration=3.28951289 podStartE2EDuration="3.28951289s" podCreationTimestamp="2025-11-28 12:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:56.274483704 +0000 UTC m=+1216.840159058" watchObservedRunningTime="2025-11-28 12:55:56.28951289 +0000 UTC m=+1216.855188254" Nov 28 12:55:56 crc kubenswrapper[4779]: I1128 12:55:56.948589 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:55:56 crc kubenswrapper[4779]: I1128 12:55:56.948926 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="ceilometer-central-agent" containerID="cri-o://1933d485b31e0a6d50e4c9567d1bda24d717564e5c93aa56578ef001ae4e7d68" gracePeriod=30 Nov 28 12:55:56 crc kubenswrapper[4779]: I1128 12:55:56.949076 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="proxy-httpd" containerID="cri-o://af291712d598b8d6fd36a612c4cdabc5152532d875d560ae2f111be115da9951" gracePeriod=30 Nov 28 12:55:56 crc kubenswrapper[4779]: I1128 12:55:56.949173 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="sg-core" containerID="cri-o://0ae6312f8ee755e61ab4466220c49a10118979506e8a763f2c414ca5d589ed50" gracePeriod=30 Nov 28 12:55:56 crc kubenswrapper[4779]: I1128 12:55:56.949215 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="ceilometer-notification-agent" containerID="cri-o://c07e218bebc32e46a5262a66789e3b2e6c91840016611a839e148ea842e4ab06" gracePeriod=30 Nov 28 12:55:56 crc kubenswrapper[4779]: I1128 12:55:56.973211 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.129397 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6dc88d6fdd-9vtxx"] Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.130727 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.148947 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6dc88d6fdd-9vtxx"] Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.159618 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7c7488cf49-mbbwj"] Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.171215 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.180012 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-66496b66cd-vwvjx"] Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.181272 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.194335 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c7488cf49-mbbwj"] Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.216662 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-config-data\") pod \"heat-cfnapi-7c7488cf49-mbbwj\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.216732 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b-config-data-custom\") pod \"heat-engine-6dc88d6fdd-9vtxx\" (UID: \"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b\") " pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.216761 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-combined-ca-bundle\") pod \"heat-api-66496b66cd-vwvjx\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.216803 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b-combined-ca-bundle\") pod \"heat-engine-6dc88d6fdd-9vtxx\" (UID: \"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b\") " pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.216829 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-config-data-custom\") pod \"heat-api-66496b66cd-vwvjx\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.216859 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b-config-data\") pod \"heat-engine-6dc88d6fdd-9vtxx\" (UID: \"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b\") " pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc 
kubenswrapper[4779]: I1128 12:55:57.216896 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-config-data-custom\") pod \"heat-cfnapi-7c7488cf49-mbbwj\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.216921 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-config-data\") pod \"heat-api-66496b66cd-vwvjx\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.216938 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr52v\" (UniqueName: \"kubernetes.io/projected/0e8eae11-8b73-41ea-a4ac-6c58de45319e-kube-api-access-sr52v\") pod \"heat-cfnapi-7c7488cf49-mbbwj\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.216961 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b69nj\" (UniqueName: \"kubernetes.io/projected/a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b-kube-api-access-b69nj\") pod \"heat-engine-6dc88d6fdd-9vtxx\" (UID: \"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b\") " pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.216988 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-combined-ca-bundle\") pod \"heat-cfnapi-7c7488cf49-mbbwj\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.217010 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrpwj\" (UniqueName: \"kubernetes.io/projected/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-kube-api-access-wrpwj\") pod \"heat-api-66496b66cd-vwvjx\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.233169 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-66496b66cd-vwvjx"] Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.274065 4779 generic.go:334] "Generic (PLEG): container finished" podID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerID="af291712d598b8d6fd36a612c4cdabc5152532d875d560ae2f111be115da9951" exitCode=0 Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.274108 4779 generic.go:334] "Generic (PLEG): container finished" podID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerID="0ae6312f8ee755e61ab4466220c49a10118979506e8a763f2c414ca5d589ed50" exitCode=2 Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.275108 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd014390-52f0-4d0c-944f-aed58d5b179f","Type":"ContainerDied","Data":"af291712d598b8d6fd36a612c4cdabc5152532d875d560ae2f111be115da9951"} Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.275136 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"bd014390-52f0-4d0c-944f-aed58d5b179f","Type":"ContainerDied","Data":"0ae6312f8ee755e61ab4466220c49a10118979506e8a763f2c414ca5d589ed50"} Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.319374 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-config-data-custom\") pod \"heat-cfnapi-7c7488cf49-mbbwj\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.319436 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-config-data\") pod \"heat-api-66496b66cd-vwvjx\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.319458 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr52v\" (UniqueName: \"kubernetes.io/projected/0e8eae11-8b73-41ea-a4ac-6c58de45319e-kube-api-access-sr52v\") pod \"heat-cfnapi-7c7488cf49-mbbwj\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.319483 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b69nj\" (UniqueName: \"kubernetes.io/projected/a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b-kube-api-access-b69nj\") pod \"heat-engine-6dc88d6fdd-9vtxx\" (UID: \"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b\") " pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.319511 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-combined-ca-bundle\") pod \"heat-cfnapi-7c7488cf49-mbbwj\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.319534 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrpwj\" (UniqueName: \"kubernetes.io/projected/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-kube-api-access-wrpwj\") pod \"heat-api-66496b66cd-vwvjx\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.319564 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-config-data\") pod \"heat-cfnapi-7c7488cf49-mbbwj\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.319615 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b-config-data-custom\") pod \"heat-engine-6dc88d6fdd-9vtxx\" (UID: \"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b\") " pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.319639 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-combined-ca-bundle\") pod \"heat-api-66496b66cd-vwvjx\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.319697 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b-combined-ca-bundle\") pod \"heat-engine-6dc88d6fdd-9vtxx\" (UID: \"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b\") " pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.319719 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-config-data-custom\") pod \"heat-api-66496b66cd-vwvjx\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.319742 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b-config-data\") pod \"heat-engine-6dc88d6fdd-9vtxx\" (UID: \"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b\") " pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.327463 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-config-data-custom\") pod \"heat-cfnapi-7c7488cf49-mbbwj\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.333909 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b-config-data\") pod \"heat-engine-6dc88d6fdd-9vtxx\" (UID: \"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b\") " pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.334329 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-config-data\") pod \"heat-api-66496b66cd-vwvjx\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.336240 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-combined-ca-bundle\") pod \"heat-cfnapi-7c7488cf49-mbbwj\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.336625 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b-config-data-custom\") pod \"heat-engine-6dc88d6fdd-9vtxx\" (UID: \"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b\") " pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.336768 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-config-data-custom\") pod \"heat-api-66496b66cd-vwvjx\" 
(UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.337243 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b-combined-ca-bundle\") pod \"heat-engine-6dc88d6fdd-9vtxx\" (UID: \"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b\") " pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.338960 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-combined-ca-bundle\") pod \"heat-api-66496b66cd-vwvjx\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.339836 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-config-data\") pod \"heat-cfnapi-7c7488cf49-mbbwj\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.345341 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr52v\" (UniqueName: \"kubernetes.io/projected/0e8eae11-8b73-41ea-a4ac-6c58de45319e-kube-api-access-sr52v\") pod \"heat-cfnapi-7c7488cf49-mbbwj\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.348770 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b69nj\" (UniqueName: \"kubernetes.io/projected/a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b-kube-api-access-b69nj\") pod \"heat-engine-6dc88d6fdd-9vtxx\" (UID: \"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b\") " pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.349343 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrpwj\" (UniqueName: \"kubernetes.io/projected/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-kube-api-access-wrpwj\") pod \"heat-api-66496b66cd-vwvjx\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.445794 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.555766 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:55:57 crc kubenswrapper[4779]: I1128 12:55:57.559372 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.004350 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6dc88d6fdd-9vtxx"] Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.155488 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7c7488cf49-mbbwj"] Nov 28 12:55:58 crc kubenswrapper[4779]: W1128 12:55:58.161230 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e8eae11_8b73_41ea_a4ac_6c58de45319e.slice/crio-0b19323dbb732df5270c371b2ccc3126c8a26cb7f6602fbe0676e6a89241ac39 WatchSource:0}: Error finding container 0b19323dbb732df5270c371b2ccc3126c8a26cb7f6602fbe0676e6a89241ac39: Status 404 returned error can't find the container with id 0b19323dbb732df5270c371b2ccc3126c8a26cb7f6602fbe0676e6a89241ac39 Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.167496 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-66496b66cd-vwvjx"] Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.245206 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.299005 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6dc88d6fdd-9vtxx" event={"ID":"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b","Type":"ContainerStarted","Data":"a025b3dfa818f69935ae8d34d50a1eb6a69ee4b15d3f8d1fcad350cdd3713d6f"} Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.304082 4779 generic.go:334] "Generic (PLEG): container finished" podID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerID="c07e218bebc32e46a5262a66789e3b2e6c91840016611a839e148ea842e4ab06" exitCode=0 Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.304119 4779 generic.go:334] "Generic (PLEG): container finished" podID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerID="1933d485b31e0a6d50e4c9567d1bda24d717564e5c93aa56578ef001ae4e7d68" exitCode=0 Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.304150 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd014390-52f0-4d0c-944f-aed58d5b179f","Type":"ContainerDied","Data":"c07e218bebc32e46a5262a66789e3b2e6c91840016611a839e148ea842e4ab06"} Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.304165 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd014390-52f0-4d0c-944f-aed58d5b179f","Type":"ContainerDied","Data":"1933d485b31e0a6d50e4c9567d1bda24d717564e5c93aa56578ef001ae4e7d68"} Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.305392 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66496b66cd-vwvjx" event={"ID":"8c363caa-b3e7-4f43-9bf9-8846a5e72c34","Type":"ContainerStarted","Data":"afad1b2161d51c693125344e12f6601466a89c9241cb862167e11edc78e0c43f"} Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.306862 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" event={"ID":"0e8eae11-8b73-41ea-a4ac-6c58de45319e","Type":"ContainerStarted","Data":"0b19323dbb732df5270c371b2ccc3126c8a26cb7f6602fbe0676e6a89241ac39"} Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.435337 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-674bfd5544-x2xz6" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 
12:55:58.571386 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.679381 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-sg-core-conf-yaml\") pod \"bd014390-52f0-4d0c-944f-aed58d5b179f\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.679486 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-config-data\") pod \"bd014390-52f0-4d0c-944f-aed58d5b179f\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.679507 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-combined-ca-bundle\") pod \"bd014390-52f0-4d0c-944f-aed58d5b179f\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.679561 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-scripts\") pod \"bd014390-52f0-4d0c-944f-aed58d5b179f\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.679607 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8cp5\" (UniqueName: \"kubernetes.io/projected/bd014390-52f0-4d0c-944f-aed58d5b179f-kube-api-access-g8cp5\") pod \"bd014390-52f0-4d0c-944f-aed58d5b179f\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.679688 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd014390-52f0-4d0c-944f-aed58d5b179f-run-httpd\") pod \"bd014390-52f0-4d0c-944f-aed58d5b179f\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.679745 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd014390-52f0-4d0c-944f-aed58d5b179f-log-httpd\") pod \"bd014390-52f0-4d0c-944f-aed58d5b179f\" (UID: \"bd014390-52f0-4d0c-944f-aed58d5b179f\") " Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.680644 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd014390-52f0-4d0c-944f-aed58d5b179f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bd014390-52f0-4d0c-944f-aed58d5b179f" (UID: "bd014390-52f0-4d0c-944f-aed58d5b179f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.683835 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-scripts" (OuterVolumeSpecName: "scripts") pod "bd014390-52f0-4d0c-944f-aed58d5b179f" (UID: "bd014390-52f0-4d0c-944f-aed58d5b179f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.686358 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd014390-52f0-4d0c-944f-aed58d5b179f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bd014390-52f0-4d0c-944f-aed58d5b179f" (UID: "bd014390-52f0-4d0c-944f-aed58d5b179f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.686403 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd014390-52f0-4d0c-944f-aed58d5b179f-kube-api-access-g8cp5" (OuterVolumeSpecName: "kube-api-access-g8cp5") pod "bd014390-52f0-4d0c-944f-aed58d5b179f" (UID: "bd014390-52f0-4d0c-944f-aed58d5b179f"). InnerVolumeSpecName "kube-api-access-g8cp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.736199 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bd014390-52f0-4d0c-944f-aed58d5b179f" (UID: "bd014390-52f0-4d0c-944f-aed58d5b179f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.781722 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8cp5\" (UniqueName: \"kubernetes.io/projected/bd014390-52f0-4d0c-944f-aed58d5b179f-kube-api-access-g8cp5\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.781750 4779 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd014390-52f0-4d0c-944f-aed58d5b179f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.781759 4779 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd014390-52f0-4d0c-944f-aed58d5b179f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.781767 4779 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.781776 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.783265 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd014390-52f0-4d0c-944f-aed58d5b179f" (UID: "bd014390-52f0-4d0c-944f-aed58d5b179f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.784273 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-config-data" (OuterVolumeSpecName: "config-data") pod "bd014390-52f0-4d0c-944f-aed58d5b179f" (UID: "bd014390-52f0-4d0c-944f-aed58d5b179f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.906625 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.906663 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd014390-52f0-4d0c-944f-aed58d5b179f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.923438 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-864dfdcc4d-7wcth"] Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.923935 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-864dfdcc4d-7wcth" podUID="240f13c7-a251-4e85-b0c2-fafc0c03d52c" containerName="heat-api" containerID="cri-o://c264d2dafa86be1299c2b98d8efd0e997671227c6702da51282cecb82e2fa9f9" gracePeriod=60 Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.939885 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b7bf76b6-kbx6h"] Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.940082 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" podUID="1604b21d-7590-4a34-9cf2-e3d03f2db385" containerName="heat-cfnapi" containerID="cri-o://bfb97f0358d284d94f52421e9edc4d3f896df828fcc1e9f49163b623682331b6" gracePeriod=60 Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.950357 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-74c96b7975-gndjl"] Nov 28 12:55:58 crc kubenswrapper[4779]: E1128 12:55:58.950708 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="ceilometer-notification-agent" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.950724 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="ceilometer-notification-agent" Nov 28 12:55:58 crc kubenswrapper[4779]: E1128 12:55:58.950734 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="ceilometer-central-agent" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.950739 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="ceilometer-central-agent" Nov 28 12:55:58 crc kubenswrapper[4779]: E1128 12:55:58.950769 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="proxy-httpd" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.950775 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="proxy-httpd" Nov 28 12:55:58 crc kubenswrapper[4779]: E1128 12:55:58.950789 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="sg-core" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.950795 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="sg-core" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.950934 4779 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="ceilometer-central-agent" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.950945 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="sg-core" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.950959 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="proxy-httpd" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.950972 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" containerName="ceilometer-notification-agent" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.952797 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.961314 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.961464 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.962445 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-864dfdcc4d-7wcth" podUID="240f13c7-a251-4e85-b0c2-fafc0c03d52c" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.168:8004/healthcheck\": EOF" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.975727 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5675dff4b5-5c9sq"] Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.977059 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.981000 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.981241 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 28 12:55:58 crc kubenswrapper[4779]: I1128 12:55:58.996903 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74c96b7975-gndjl"] Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.014028 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5675dff4b5-5c9sq"] Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.020280 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.110734 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-combined-ca-bundle\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.110947 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-config-data-custom\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.111524 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-internal-tls-certs\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.111556 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp6tk\" (UniqueName: \"kubernetes.io/projected/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-kube-api-access-pp6tk\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.111646 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-public-tls-certs\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.111745 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txsrz\" (UniqueName: \"kubernetes.io/projected/ec26de16-988c-4242-8de5-e379eeff18d8-kube-api-access-txsrz\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.111796 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-combined-ca-bundle\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.111960 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-public-tls-certs\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.111999 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-config-data-custom\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.112072 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-config-data\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.112673 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-internal-tls-certs\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.112726 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-config-data\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: E1128 12:55:59.126627 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c363caa_b3e7_4f43_9bf9_8846a5e72c34.slice/crio-conmon-710e68ca25b60cbcc0e090e4464d40ac156b5c3f28ac680ba1a9c8821b2b6238.scope\": RecentStats: unable to find data in memory cache]" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.218165 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-config-data\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.218228 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-internal-tls-certs\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.218248 4779 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-config-data\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.218267 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-combined-ca-bundle\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.218312 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-config-data-custom\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.218334 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-internal-tls-certs\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.218350 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp6tk\" (UniqueName: \"kubernetes.io/projected/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-kube-api-access-pp6tk\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.218373 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-public-tls-certs\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.218402 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txsrz\" (UniqueName: \"kubernetes.io/projected/ec26de16-988c-4242-8de5-e379eeff18d8-kube-api-access-txsrz\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.218436 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-combined-ca-bundle\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.218467 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-public-tls-certs\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.218483 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-config-data-custom\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.225266 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-public-tls-certs\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.227005 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-public-tls-certs\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.227057 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-combined-ca-bundle\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.227586 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-internal-tls-certs\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.236848 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-config-data\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.239242 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-config-data-custom\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.239819 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txsrz\" (UniqueName: \"kubernetes.io/projected/ec26de16-988c-4242-8de5-e379eeff18d8-kube-api-access-txsrz\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.241593 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-config-data\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.242426 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp6tk\" (UniqueName: 
\"kubernetes.io/projected/b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325-kube-api-access-pp6tk\") pod \"heat-cfnapi-74c96b7975-gndjl\" (UID: \"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325\") " pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.242500 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-combined-ca-bundle\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.245235 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-config-data-custom\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.248396 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec26de16-988c-4242-8de5-e379eeff18d8-internal-tls-certs\") pod \"heat-api-5675dff4b5-5c9sq\" (UID: \"ec26de16-988c-4242-8de5-e379eeff18d8\") " pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.293034 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.304833 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.329515 4779 generic.go:334] "Generic (PLEG): container finished" podID="0e8eae11-8b73-41ea-a4ac-6c58de45319e" containerID="c7130811ed245c344916d9297859c0259452d61763f260dff06f476171b4bad7" exitCode=1 Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.329572 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" event={"ID":"0e8eae11-8b73-41ea-a4ac-6c58de45319e","Type":"ContainerDied","Data":"c7130811ed245c344916d9297859c0259452d61763f260dff06f476171b4bad7"} Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.329966 4779 scope.go:117] "RemoveContainer" containerID="c7130811ed245c344916d9297859c0259452d61763f260dff06f476171b4bad7" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.333201 4779 generic.go:334] "Generic (PLEG): container finished" podID="1604b21d-7590-4a34-9cf2-e3d03f2db385" containerID="bfb97f0358d284d94f52421e9edc4d3f896df828fcc1e9f49163b623682331b6" exitCode=0 Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.333241 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" event={"ID":"1604b21d-7590-4a34-9cf2-e3d03f2db385","Type":"ContainerDied","Data":"bfb97f0358d284d94f52421e9edc4d3f896df828fcc1e9f49163b623682331b6"} Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.344348 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6dc88d6fdd-9vtxx" event={"ID":"a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b","Type":"ContainerStarted","Data":"86613955a4483fb18504956b72f7cf346571e0cf552669e87acbd8cdc7992164"} Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.345113 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:55:59 crc 
kubenswrapper[4779]: I1128 12:55:59.359354 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.361200 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd014390-52f0-4d0c-944f-aed58d5b179f","Type":"ContainerDied","Data":"16fdcc6b856e9fee8f87f8464616cb1621b38a9396acf5da34785964e5c7b868"} Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.361263 4779 scope.go:117] "RemoveContainer" containerID="af291712d598b8d6fd36a612c4cdabc5152532d875d560ae2f111be115da9951" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.373741 4779 generic.go:334] "Generic (PLEG): container finished" podID="8c363caa-b3e7-4f43-9bf9-8846a5e72c34" containerID="710e68ca25b60cbcc0e090e4464d40ac156b5c3f28ac680ba1a9c8821b2b6238" exitCode=1 Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.373784 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66496b66cd-vwvjx" event={"ID":"8c363caa-b3e7-4f43-9bf9-8846a5e72c34","Type":"ContainerDied","Data":"710e68ca25b60cbcc0e090e4464d40ac156b5c3f28ac680ba1a9c8821b2b6238"} Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.374316 4779 scope.go:117] "RemoveContainer" containerID="710e68ca25b60cbcc0e090e4464d40ac156b5c3f28ac680ba1a9c8821b2b6238" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.391297 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6dc88d6fdd-9vtxx" podStartSLOduration=2.391272301 podStartE2EDuration="2.391272301s" podCreationTimestamp="2025-11-28 12:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:55:59.372020173 +0000 UTC m=+1219.937695527" watchObservedRunningTime="2025-11-28 12:55:59.391272301 +0000 UTC m=+1219.956947665" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.515416 4779 scope.go:117] "RemoveContainer" containerID="0ae6312f8ee755e61ab4466220c49a10118979506e8a763f2c414ca5d589ed50" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.535664 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.560157 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.579230 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.581659 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.588833 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.589023 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.593237 4779 scope.go:117] "RemoveContainer" containerID="c07e218bebc32e46a5262a66789e3b2e6c91840016611a839e148ea842e4ab06" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.593669 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.629775 4779 scope.go:117] "RemoveContainer" containerID="1933d485b31e0a6d50e4c9567d1bda24d717564e5c93aa56578ef001ae4e7d68" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.740731 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.740787 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-config-data\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.740812 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-scripts\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.740854 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.740906 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmt79\" (UniqueName: \"kubernetes.io/projected/87e78f52-9c09-473f-b884-20bc130d6ede-kube-api-access-mmt79\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.740944 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87e78f52-9c09-473f-b884-20bc130d6ede-run-httpd\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.740973 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87e78f52-9c09-473f-b884-20bc130d6ede-log-httpd\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 
12:55:59.753940 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd014390-52f0-4d0c-944f-aed58d5b179f" path="/var/lib/kubelet/pods/bd014390-52f0-4d0c-944f-aed58d5b179f/volumes" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.843427 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87e78f52-9c09-473f-b884-20bc130d6ede-run-httpd\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.843473 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87e78f52-9c09-473f-b884-20bc130d6ede-log-httpd\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.843533 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.843570 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-config-data\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.843594 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-scripts\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.843633 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.843653 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmt79\" (UniqueName: \"kubernetes.io/projected/87e78f52-9c09-473f-b884-20bc130d6ede-kube-api-access-mmt79\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.846609 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87e78f52-9c09-473f-b884-20bc130d6ede-run-httpd\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.846819 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87e78f52-9c09-473f-b884-20bc130d6ede-log-httpd\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.850562 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.850807 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-scripts\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.851152 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.852017 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-config-data\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.868572 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmt79\" (UniqueName: \"kubernetes.io/projected/87e78f52-9c09-473f-b884-20bc130d6ede-kube-api-access-mmt79\") pod \"ceilometer-0\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.871624 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74c96b7975-gndjl"] Nov 28 12:55:59 crc kubenswrapper[4779]: I1128 12:55:59.925266 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:55:59 crc kubenswrapper[4779]: W1128 12:55:59.937494 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6ecf1b7_5d5c_4a0f_9fcb_caed8534c325.slice/crio-105590c8bac73866214e390c2d9fb2e68f79bc501f78690d9f99cd5f5e4faf64 WatchSource:0}: Error finding container 105590c8bac73866214e390c2d9fb2e68f79bc501f78690d9f99cd5f5e4faf64: Status 404 returned error can't find the container with id 105590c8bac73866214e390c2d9fb2e68f79bc501f78690d9f99cd5f5e4faf64 Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.041698 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5675dff4b5-5c9sq"] Nov 28 12:56:00 crc kubenswrapper[4779]: W1128 12:56:00.043033 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec26de16_988c_4242_8de5_e379eeff18d8.slice/crio-cd9c514d2e1fb5827e4a38470d744820f9083ed74b3ad9a907ff999b1ca89169 WatchSource:0}: Error finding container cd9c514d2e1fb5827e4a38470d744820f9083ed74b3ad9a907ff999b1ca89169: Status 404 returned error can't find the container with id cd9c514d2e1fb5827e4a38470d744820f9083ed74b3ad9a907ff999b1ca89169 Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.393468 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74c96b7975-gndjl" event={"ID":"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325","Type":"ContainerStarted","Data":"105590c8bac73866214e390c2d9fb2e68f79bc501f78690d9f99cd5f5e4faf64"} Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.399456 4779 generic.go:334] "Generic (PLEG): container finished" podID="8c363caa-b3e7-4f43-9bf9-8846a5e72c34" containerID="4332371bb729a4894b6315da2aa228a1048c00c59c447a6af9dcdf9e4babaa03" exitCode=1 Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.399510 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66496b66cd-vwvjx" event={"ID":"8c363caa-b3e7-4f43-9bf9-8846a5e72c34","Type":"ContainerDied","Data":"4332371bb729a4894b6315da2aa228a1048c00c59c447a6af9dcdf9e4babaa03"} Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.399538 4779 scope.go:117] "RemoveContainer" containerID="710e68ca25b60cbcc0e090e4464d40ac156b5c3f28ac680ba1a9c8821b2b6238" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.401962 4779 scope.go:117] "RemoveContainer" containerID="4332371bb729a4894b6315da2aa228a1048c00c59c447a6af9dcdf9e4babaa03" Nov 28 12:56:00 crc kubenswrapper[4779]: E1128 12:56:00.402321 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-66496b66cd-vwvjx_openstack(8c363caa-b3e7-4f43-9bf9-8846a5e72c34)\"" pod="openstack/heat-api-66496b66cd-vwvjx" podUID="8c363caa-b3e7-4f43-9bf9-8846a5e72c34" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.402467 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5675dff4b5-5c9sq" event={"ID":"ec26de16-988c-4242-8de5-e379eeff18d8","Type":"ContainerStarted","Data":"cd9c514d2e1fb5827e4a38470d744820f9083ed74b3ad9a907ff999b1ca89169"} Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.411222 4779 generic.go:334] "Generic (PLEG): container finished" podID="0e8eae11-8b73-41ea-a4ac-6c58de45319e" containerID="719e836f82801fccc411277b39ff104f0c7d3ec4a8f1c73fb520c6150630cba9" exitCode=1 Nov 28 12:56:00 crc 
kubenswrapper[4779]: I1128 12:56:00.411309 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" event={"ID":"0e8eae11-8b73-41ea-a4ac-6c58de45319e","Type":"ContainerDied","Data":"719e836f82801fccc411277b39ff104f0c7d3ec4a8f1c73fb520c6150630cba9"} Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.412331 4779 scope.go:117] "RemoveContainer" containerID="719e836f82801fccc411277b39ff104f0c7d3ec4a8f1c73fb520c6150630cba9" Nov 28 12:56:00 crc kubenswrapper[4779]: E1128 12:56:00.412618 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7c7488cf49-mbbwj_openstack(0e8eae11-8b73-41ea-a4ac-6c58de45319e)\"" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" podUID="0e8eae11-8b73-41ea-a4ac-6c58de45319e" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.421644 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" event={"ID":"1604b21d-7590-4a34-9cf2-e3d03f2db385","Type":"ContainerDied","Data":"86e410d0d5a6272d7b21dd2630d535a862050629b6d2737319434f006762f2ef"} Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.421681 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86e410d0d5a6272d7b21dd2630d535a862050629b6d2737319434f006762f2ef" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.461216 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.541883 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.545430 4779 scope.go:117] "RemoveContainer" containerID="c7130811ed245c344916d9297859c0259452d61763f260dff06f476171b4bad7" Nov 28 12:56:00 crc kubenswrapper[4779]: W1128 12:56:00.558144 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87e78f52_9c09_473f_b884_20bc130d6ede.slice/crio-80bc8c55bdeb515790b99c9f3dc54b05db23928a9fa10ac42f315f3d915e89d2 WatchSource:0}: Error finding container 80bc8c55bdeb515790b99c9f3dc54b05db23928a9fa10ac42f315f3d915e89d2: Status 404 returned error can't find the container with id 80bc8c55bdeb515790b99c9f3dc54b05db23928a9fa10ac42f315f3d915e89d2 Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.558551 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-config-data\") pod \"1604b21d-7590-4a34-9cf2-e3d03f2db385\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.558708 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-config-data-custom\") pod \"1604b21d-7590-4a34-9cf2-e3d03f2db385\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.558748 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-combined-ca-bundle\") pod \"1604b21d-7590-4a34-9cf2-e3d03f2db385\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " Nov 28 
12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.558822 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv8qm\" (UniqueName: \"kubernetes.io/projected/1604b21d-7590-4a34-9cf2-e3d03f2db385-kube-api-access-pv8qm\") pod \"1604b21d-7590-4a34-9cf2-e3d03f2db385\" (UID: \"1604b21d-7590-4a34-9cf2-e3d03f2db385\") " Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.575125 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1604b21d-7590-4a34-9cf2-e3d03f2db385" (UID: "1604b21d-7590-4a34-9cf2-e3d03f2db385"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.575258 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1604b21d-7590-4a34-9cf2-e3d03f2db385-kube-api-access-pv8qm" (OuterVolumeSpecName: "kube-api-access-pv8qm") pod "1604b21d-7590-4a34-9cf2-e3d03f2db385" (UID: "1604b21d-7590-4a34-9cf2-e3d03f2db385"). InnerVolumeSpecName "kube-api-access-pv8qm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.644019 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1604b21d-7590-4a34-9cf2-e3d03f2db385" (UID: "1604b21d-7590-4a34-9cf2-e3d03f2db385"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.660494 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv8qm\" (UniqueName: \"kubernetes.io/projected/1604b21d-7590-4a34-9cf2-e3d03f2db385-kube-api-access-pv8qm\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.660525 4779 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.660539 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.670241 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-config-data" (OuterVolumeSpecName: "config-data") pod "1604b21d-7590-4a34-9cf2-e3d03f2db385" (UID: "1604b21d-7590-4a34-9cf2-e3d03f2db385"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.762171 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1604b21d-7590-4a34-9cf2-e3d03f2db385-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.880254 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.941356 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-j87hp"] Nov 28 12:56:00 crc kubenswrapper[4779]: I1128 12:56:00.941581 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" podUID="126d0821-3736-467c-b13c-a5697d834177" containerName="dnsmasq-dns" containerID="cri-o://25c80160d37c24d67d161ffa904d288e792389023e84e3a070b6a06b940f6168" gracePeriod=10 Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.438999 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5675dff4b5-5c9sq" event={"ID":"ec26de16-988c-4242-8de5-e379eeff18d8","Type":"ContainerStarted","Data":"915a2eab8f3d54efa3a6ef37d89018cd7dbe4440dc9f38004421052ef68193df"} Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.439346 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.441949 4779 scope.go:117] "RemoveContainer" containerID="719e836f82801fccc411277b39ff104f0c7d3ec4a8f1c73fb520c6150630cba9" Nov 28 12:56:01 crc kubenswrapper[4779]: E1128 12:56:01.442169 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7c7488cf49-mbbwj_openstack(0e8eae11-8b73-41ea-a4ac-6c58de45319e)\"" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" podUID="0e8eae11-8b73-41ea-a4ac-6c58de45319e" Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.442645 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87e78f52-9c09-473f-b884-20bc130d6ede","Type":"ContainerStarted","Data":"80bc8c55bdeb515790b99c9f3dc54b05db23928a9fa10ac42f315f3d915e89d2"} Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.450479 4779 generic.go:334] "Generic (PLEG): container finished" podID="126d0821-3736-467c-b13c-a5697d834177" containerID="25c80160d37c24d67d161ffa904d288e792389023e84e3a070b6a06b940f6168" exitCode=0 Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.450565 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" event={"ID":"126d0821-3736-467c-b13c-a5697d834177","Type":"ContainerDied","Data":"25c80160d37c24d67d161ffa904d288e792389023e84e3a070b6a06b940f6168"} Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.453362 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74c96b7975-gndjl" event={"ID":"b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325","Type":"ContainerStarted","Data":"7654a7ca4b0697f745f2237e196d12b3b2469d0b2fcbac36c0dfb7c527ccf117"} Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.453456 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.456749 4779 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b7bf76b6-kbx6h" Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.457613 4779 scope.go:117] "RemoveContainer" containerID="4332371bb729a4894b6315da2aa228a1048c00c59c447a6af9dcdf9e4babaa03" Nov 28 12:56:01 crc kubenswrapper[4779]: E1128 12:56:01.457781 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-66496b66cd-vwvjx_openstack(8c363caa-b3e7-4f43-9bf9-8846a5e72c34)\"" pod="openstack/heat-api-66496b66cd-vwvjx" podUID="8c363caa-b3e7-4f43-9bf9-8846a5e72c34" Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.468118 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5675dff4b5-5c9sq" podStartSLOduration=3.468081922 podStartE2EDuration="3.468081922s" podCreationTimestamp="2025-11-28 12:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:56:01.454822392 +0000 UTC m=+1222.020497746" watchObservedRunningTime="2025-11-28 12:56:01.468081922 +0000 UTC m=+1222.033757276" Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.495949 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-74c96b7975-gndjl" podStartSLOduration=3.495933837 podStartE2EDuration="3.495933837s" podCreationTimestamp="2025-11-28 12:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:56:01.486500508 +0000 UTC m=+1222.052175862" watchObservedRunningTime="2025-11-28 12:56:01.495933837 +0000 UTC m=+1222.061609191" Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.512391 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6b7bf76b6-kbx6h"] Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.529876 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6b7bf76b6-kbx6h"] Nov 28 12:56:01 crc kubenswrapper[4779]: I1128 12:56:01.742422 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1604b21d-7590-4a34-9cf2-e3d03f2db385" path="/var/lib/kubelet/pods/1604b21d-7590-4a34-9cf2-e3d03f2db385/volumes" Nov 28 12:56:02 crc kubenswrapper[4779]: I1128 12:56:02.556472 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:56:02 crc kubenswrapper[4779]: I1128 12:56:02.556538 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:56:02 crc kubenswrapper[4779]: I1128 12:56:02.557307 4779 scope.go:117] "RemoveContainer" containerID="719e836f82801fccc411277b39ff104f0c7d3ec4a8f1c73fb520c6150630cba9" Nov 28 12:56:02 crc kubenswrapper[4779]: E1128 12:56:02.558297 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7c7488cf49-mbbwj_openstack(0e8eae11-8b73-41ea-a4ac-6c58de45319e)\"" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" podUID="0e8eae11-8b73-41ea-a4ac-6c58de45319e" Nov 28 12:56:02 crc kubenswrapper[4779]: I1128 12:56:02.560124 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:56:02 crc kubenswrapper[4779]: I1128 12:56:02.560188 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:56:02 crc kubenswrapper[4779]: I1128 12:56:02.561030 4779 scope.go:117] "RemoveContainer" containerID="4332371bb729a4894b6315da2aa228a1048c00c59c447a6af9dcdf9e4babaa03" Nov 28 12:56:02 crc kubenswrapper[4779]: E1128 12:56:02.561391 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-66496b66cd-vwvjx_openstack(8c363caa-b3e7-4f43-9bf9-8846a5e72c34)\"" pod="openstack/heat-api-66496b66cd-vwvjx" podUID="8c363caa-b3e7-4f43-9bf9-8846a5e72c34" Nov 28 12:56:02 crc kubenswrapper[4779]: I1128 12:56:02.946248 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:04 crc kubenswrapper[4779]: I1128 12:56:04.183162 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:56:04 crc kubenswrapper[4779]: I1128 12:56:04.189958 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-9c6b99df5-82cnl" Nov 28 12:56:04 crc kubenswrapper[4779]: I1128 12:56:04.335618 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-864dfdcc4d-7wcth" podUID="240f13c7-a251-4e85-b0c2-fafc0c03d52c" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.168:8004/healthcheck\": read tcp 10.217.0.2:52786->10.217.0.168:8004: read: connection reset by peer" Nov 28 12:56:04 crc kubenswrapper[4779]: I1128 12:56:04.405606 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" podUID="126d0821-3736-467c-b13c-a5697d834177" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.162:5353: connect: connection refused" Nov 28 12:56:05 crc kubenswrapper[4779]: I1128 12:56:05.498413 4779 generic.go:334] "Generic (PLEG): container finished" podID="240f13c7-a251-4e85-b0c2-fafc0c03d52c" containerID="c264d2dafa86be1299c2b98d8efd0e997671227c6702da51282cecb82e2fa9f9" exitCode=0 Nov 28 12:56:05 crc kubenswrapper[4779]: I1128 12:56:05.498452 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-864dfdcc4d-7wcth" event={"ID":"240f13c7-a251-4e85-b0c2-fafc0c03d52c","Type":"ContainerDied","Data":"c264d2dafa86be1299c2b98d8efd0e997671227c6702da51282cecb82e2fa9f9"} Nov 28 12:56:05 crc kubenswrapper[4779]: I1128 12:56:05.916053 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-864dfdcc4d-7wcth" podUID="240f13c7-a251-4e85-b0c2-fafc0c03d52c" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.168:8004/healthcheck\": dial tcp 10.217.0.168:8004: connect: connection refused" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.095362 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.150309 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.188933 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-dns-svc\") pod \"126d0821-3736-467c-b13c-a5697d834177\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.189245 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-config-data\") pod \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.189491 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-dns-swift-storage-0\") pod \"126d0821-3736-467c-b13c-a5697d834177\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.189608 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-config-data-custom\") pod \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.189689 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-combined-ca-bundle\") pod \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.189848 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8wxr\" (UniqueName: \"kubernetes.io/projected/240f13c7-a251-4e85-b0c2-fafc0c03d52c-kube-api-access-r8wxr\") pod \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\" (UID: \"240f13c7-a251-4e85-b0c2-fafc0c03d52c\") " Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.189954 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-config\") pod \"126d0821-3736-467c-b13c-a5697d834177\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.190058 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-ovsdbserver-sb\") pod \"126d0821-3736-467c-b13c-a5697d834177\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.190199 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-ovsdbserver-nb\") pod \"126d0821-3736-467c-b13c-a5697d834177\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.190345 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg6v9\" (UniqueName: \"kubernetes.io/projected/126d0821-3736-467c-b13c-a5697d834177-kube-api-access-zg6v9\") pod 
\"126d0821-3736-467c-b13c-a5697d834177\" (UID: \"126d0821-3736-467c-b13c-a5697d834177\") " Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.198130 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/240f13c7-a251-4e85-b0c2-fafc0c03d52c-kube-api-access-r8wxr" (OuterVolumeSpecName: "kube-api-access-r8wxr") pod "240f13c7-a251-4e85-b0c2-fafc0c03d52c" (UID: "240f13c7-a251-4e85-b0c2-fafc0c03d52c"). InnerVolumeSpecName "kube-api-access-r8wxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.205765 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "240f13c7-a251-4e85-b0c2-fafc0c03d52c" (UID: "240f13c7-a251-4e85-b0c2-fafc0c03d52c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.241558 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/126d0821-3736-467c-b13c-a5697d834177-kube-api-access-zg6v9" (OuterVolumeSpecName: "kube-api-access-zg6v9") pod "126d0821-3736-467c-b13c-a5697d834177" (UID: "126d0821-3736-467c-b13c-a5697d834177"). InnerVolumeSpecName "kube-api-access-zg6v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.273166 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "240f13c7-a251-4e85-b0c2-fafc0c03d52c" (UID: "240f13c7-a251-4e85-b0c2-fafc0c03d52c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.273627 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-config" (OuterVolumeSpecName: "config") pod "126d0821-3736-467c-b13c-a5697d834177" (UID: "126d0821-3736-467c-b13c-a5697d834177"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.275006 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "126d0821-3736-467c-b13c-a5697d834177" (UID: "126d0821-3736-467c-b13c-a5697d834177"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.275349 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "126d0821-3736-467c-b13c-a5697d834177" (UID: "126d0821-3736-467c-b13c-a5697d834177"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.278916 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "126d0821-3736-467c-b13c-a5697d834177" (UID: "126d0821-3736-467c-b13c-a5697d834177"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.288976 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-config-data" (OuterVolumeSpecName: "config-data") pod "240f13c7-a251-4e85-b0c2-fafc0c03d52c" (UID: "240f13c7-a251-4e85-b0c2-fafc0c03d52c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.293574 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg6v9\" (UniqueName: \"kubernetes.io/projected/126d0821-3736-467c-b13c-a5697d834177-kube-api-access-zg6v9\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.293609 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.293624 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.293636 4779 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.293666 4779 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.293677 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240f13c7-a251-4e85-b0c2-fafc0c03d52c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.293687 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8wxr\" (UniqueName: \"kubernetes.io/projected/240f13c7-a251-4e85-b0c2-fafc0c03d52c-kube-api-access-r8wxr\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.293698 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.293708 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.304657 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-ovsdbserver-sb" 
(OuterVolumeSpecName: "ovsdbserver-sb") pod "126d0821-3736-467c-b13c-a5697d834177" (UID: "126d0821-3736-467c-b13c-a5697d834177"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.395499 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/126d0821-3736-467c-b13c-a5697d834177-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.520168 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4a8f5701-7e1d-414b-aa88-4af10f82a58e","Type":"ContainerStarted","Data":"ffd7841121f7e8c3f4410f0c2ca301ba58474c3087214e5820b8a2f086337dd4"} Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.522333 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87e78f52-9c09-473f-b884-20bc130d6ede","Type":"ContainerStarted","Data":"54c9afd7053211c99b3569e95bd8e8a163eac184d23587472b56b633aa668bfc"} Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.524638 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" event={"ID":"126d0821-3736-467c-b13c-a5697d834177","Type":"ContainerDied","Data":"0ee61e4435bfb0dcbe039745081cf9993635659d01daad604272c247c8a9ebae"} Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.524709 4779 scope.go:117] "RemoveContainer" containerID="25c80160d37c24d67d161ffa904d288e792389023e84e3a070b6a06b940f6168" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.524655 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-j87hp" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.526593 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-864dfdcc4d-7wcth" event={"ID":"240f13c7-a251-4e85-b0c2-fafc0c03d52c","Type":"ContainerDied","Data":"b686a61f2029a4e7e319807e7c55f1a6645ff5e29eb5d9b00b0c5acfc9085d67"} Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.526639 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-864dfdcc4d-7wcth" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.544910 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.057720799 podStartE2EDuration="16.544893351s" podCreationTimestamp="2025-11-28 12:55:51 +0000 UTC" firstStartedPulling="2025-11-28 12:55:52.305144794 +0000 UTC m=+1212.870820168" lastFinishedPulling="2025-11-28 12:56:06.792317366 +0000 UTC m=+1227.357992720" observedRunningTime="2025-11-28 12:56:07.542055156 +0000 UTC m=+1228.107730530" watchObservedRunningTime="2025-11-28 12:56:07.544893351 +0000 UTC m=+1228.110568725" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.564887 4779 scope.go:117] "RemoveContainer" containerID="b1db6becf352b0bcd49bc942c6033604bf3219c9ab02a31d98d98b3f8b7004e0" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.583125 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-864dfdcc4d-7wcth"] Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.598498 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-864dfdcc4d-7wcth"] Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.611150 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-j87hp"] Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.618716 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-j87hp"] Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.625475 4779 scope.go:117] "RemoveContainer" containerID="c264d2dafa86be1299c2b98d8efd0e997671227c6702da51282cecb82e2fa9f9" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.740879 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="126d0821-3736-467c-b13c-a5697d834177" path="/var/lib/kubelet/pods/126d0821-3736-467c-b13c-a5697d834177/volumes" Nov 28 12:56:07 crc kubenswrapper[4779]: I1128 12:56:07.742718 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="240f13c7-a251-4e85-b0c2-fafc0c03d52c" path="/var/lib/kubelet/pods/240f13c7-a251-4e85-b0c2-fafc0c03d52c/volumes" Nov 28 12:56:08 crc kubenswrapper[4779]: I1128 12:56:08.539517 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87e78f52-9c09-473f-b884-20bc130d6ede","Type":"ContainerStarted","Data":"594bfba7ff09abf9fb17a5fd0cd5630bab6fcb7a51b922ef078edda3684a6780"} Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.240715 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.241656 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="892bc73b-f7ab-40c1-af7b-9540725292f1" containerName="glance-log" containerID="cri-o://bf54c552df0d3675666c9a5d3ec263945f8d9fd2a5f1772bd9b2fba1af83b947" gracePeriod=30 Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.241678 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="892bc73b-f7ab-40c1-af7b-9540725292f1" containerName="glance-httpd" containerID="cri-o://f6ce8e51262e48f0e55986bcd03ca7e27b32d59ec9661907fe63a77584e36d84" gracePeriod=30 Nov 28 12:56:09 crc kubenswrapper[4779]: E1128 12:56:09.384867 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod892bc73b_f7ab_40c1_af7b_9540725292f1.slice/crio-conmon-bf54c552df0d3675666c9a5d3ec263945f8d9fd2a5f1772bd9b2fba1af83b947.scope\": RecentStats: unable to find data in memory cache]" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.548442 4779 generic.go:334] "Generic (PLEG): container finished" podID="892bc73b-f7ab-40c1-af7b-9540725292f1" containerID="bf54c552df0d3675666c9a5d3ec263945f8d9fd2a5f1772bd9b2fba1af83b947" exitCode=143 Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.548498 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"892bc73b-f7ab-40c1-af7b-9540725292f1","Type":"ContainerDied","Data":"bf54c552df0d3675666c9a5d3ec263945f8d9fd2a5f1772bd9b2fba1af83b947"} Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.550318 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87e78f52-9c09-473f-b884-20bc130d6ede","Type":"ContainerStarted","Data":"574fa4cac81acc63a4e2e72955c7cc8faf5eb633d211df2e4b0ab828531f232e"} Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.609275 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dwv55"] Nov 28 12:56:09 crc kubenswrapper[4779]: E1128 12:56:09.609937 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240f13c7-a251-4e85-b0c2-fafc0c03d52c" containerName="heat-api" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.609955 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="240f13c7-a251-4e85-b0c2-fafc0c03d52c" containerName="heat-api" Nov 28 12:56:09 crc kubenswrapper[4779]: E1128 12:56:09.609981 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="126d0821-3736-467c-b13c-a5697d834177" containerName="dnsmasq-dns" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.609987 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="126d0821-3736-467c-b13c-a5697d834177" containerName="dnsmasq-dns" Nov 28 12:56:09 crc kubenswrapper[4779]: E1128 12:56:09.609998 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="126d0821-3736-467c-b13c-a5697d834177" containerName="init" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.610006 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="126d0821-3736-467c-b13c-a5697d834177" containerName="init" Nov 28 12:56:09 crc kubenswrapper[4779]: E1128 12:56:09.610016 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1604b21d-7590-4a34-9cf2-e3d03f2db385" containerName="heat-cfnapi" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.610022 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1604b21d-7590-4a34-9cf2-e3d03f2db385" containerName="heat-cfnapi" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.610478 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1604b21d-7590-4a34-9cf2-e3d03f2db385" containerName="heat-cfnapi" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.610504 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="240f13c7-a251-4e85-b0c2-fafc0c03d52c" containerName="heat-api" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.610518 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="126d0821-3736-467c-b13c-a5697d834177" containerName="dnsmasq-dns" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.611879 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dwv55" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.634307 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dwv55"] Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.751580 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbm57\" (UniqueName: \"kubernetes.io/projected/e457ff42-a87d-4bfd-91d1-bcdc8632533d-kube-api-access-lbm57\") pod \"nova-api-db-create-dwv55\" (UID: \"e457ff42-a87d-4bfd-91d1-bcdc8632533d\") " pod="openstack/nova-api-db-create-dwv55" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.751730 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e457ff42-a87d-4bfd-91d1-bcdc8632533d-operator-scripts\") pod \"nova-api-db-create-dwv55\" (UID: \"e457ff42-a87d-4bfd-91d1-bcdc8632533d\") " pod="openstack/nova-api-db-create-dwv55" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.786254 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-cv5pl"] Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.814004 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cv5pl"] Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.814277 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cv5pl" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.824535 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-29dd-account-create-update-zs2sb"] Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.826320 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-29dd-account-create-update-zs2sb" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.828470 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.836402 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-29dd-account-create-update-zs2sb"] Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.854216 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea6137f-d265-4494-a0ba-f92b3bdd82a2-operator-scripts\") pod \"nova-api-29dd-account-create-update-zs2sb\" (UID: \"fea6137f-d265-4494-a0ba-f92b3bdd82a2\") " pod="openstack/nova-api-29dd-account-create-update-zs2sb" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.854442 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvpqn\" (UniqueName: \"kubernetes.io/projected/fea6137f-d265-4494-a0ba-f92b3bdd82a2-kube-api-access-nvpqn\") pod \"nova-api-29dd-account-create-update-zs2sb\" (UID: \"fea6137f-d265-4494-a0ba-f92b3bdd82a2\") " pod="openstack/nova-api-29dd-account-create-update-zs2sb" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.854658 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac-operator-scripts\") pod \"nova-cell0-db-create-cv5pl\" (UID: \"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac\") " pod="openstack/nova-cell0-db-create-cv5pl" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.854738 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e457ff42-a87d-4bfd-91d1-bcdc8632533d-operator-scripts\") pod \"nova-api-db-create-dwv55\" (UID: \"e457ff42-a87d-4bfd-91d1-bcdc8632533d\") " pod="openstack/nova-api-db-create-dwv55" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.855008 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbm57\" (UniqueName: \"kubernetes.io/projected/e457ff42-a87d-4bfd-91d1-bcdc8632533d-kube-api-access-lbm57\") pod \"nova-api-db-create-dwv55\" (UID: \"e457ff42-a87d-4bfd-91d1-bcdc8632533d\") " pod="openstack/nova-api-db-create-dwv55" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.855162 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65qn4\" (UniqueName: \"kubernetes.io/projected/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac-kube-api-access-65qn4\") pod \"nova-cell0-db-create-cv5pl\" (UID: \"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac\") " pod="openstack/nova-cell0-db-create-cv5pl" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.855884 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e457ff42-a87d-4bfd-91d1-bcdc8632533d-operator-scripts\") pod \"nova-api-db-create-dwv55\" (UID: \"e457ff42-a87d-4bfd-91d1-bcdc8632533d\") " pod="openstack/nova-api-db-create-dwv55" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.874371 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbm57\" (UniqueName: \"kubernetes.io/projected/e457ff42-a87d-4bfd-91d1-bcdc8632533d-kube-api-access-lbm57\") 
pod \"nova-api-db-create-dwv55\" (UID: \"e457ff42-a87d-4bfd-91d1-bcdc8632533d\") " pod="openstack/nova-api-db-create-dwv55" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.909694 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-dkjsg"] Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.910907 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dkjsg" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.925036 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-237e-account-create-update-8kj9n"] Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.926116 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-237e-account-create-update-8kj9n" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.928066 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.948340 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dkjsg"] Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.951751 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dwv55" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.960582 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac-operator-scripts\") pod \"nova-cell0-db-create-cv5pl\" (UID: \"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac\") " pod="openstack/nova-cell0-db-create-cv5pl" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.960648 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6-operator-scripts\") pod \"nova-cell0-237e-account-create-update-8kj9n\" (UID: \"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6\") " pod="openstack/nova-cell0-237e-account-create-update-8kj9n" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.960689 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65qn4\" (UniqueName: \"kubernetes.io/projected/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac-kube-api-access-65qn4\") pod \"nova-cell0-db-create-cv5pl\" (UID: \"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac\") " pod="openstack/nova-cell0-db-create-cv5pl" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.960723 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea6137f-d265-4494-a0ba-f92b3bdd82a2-operator-scripts\") pod \"nova-api-29dd-account-create-update-zs2sb\" (UID: \"fea6137f-d265-4494-a0ba-f92b3bdd82a2\") " pod="openstack/nova-api-29dd-account-create-update-zs2sb" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.960741 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvpqn\" (UniqueName: \"kubernetes.io/projected/fea6137f-d265-4494-a0ba-f92b3bdd82a2-kube-api-access-nvpqn\") pod \"nova-api-29dd-account-create-update-zs2sb\" (UID: \"fea6137f-d265-4494-a0ba-f92b3bdd82a2\") " pod="openstack/nova-api-29dd-account-create-update-zs2sb" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.960760 4779 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dggtm\" (UniqueName: \"kubernetes.io/projected/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6-kube-api-access-dggtm\") pod \"nova-cell0-237e-account-create-update-8kj9n\" (UID: \"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6\") " pod="openstack/nova-cell0-237e-account-create-update-8kj9n" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.960788 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rsjl\" (UniqueName: \"kubernetes.io/projected/a00b970a-2a2a-40c2-bc07-c1bb05d74810-kube-api-access-4rsjl\") pod \"nova-cell1-db-create-dkjsg\" (UID: \"a00b970a-2a2a-40c2-bc07-c1bb05d74810\") " pod="openstack/nova-cell1-db-create-dkjsg" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.960814 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00b970a-2a2a-40c2-bc07-c1bb05d74810-operator-scripts\") pod \"nova-cell1-db-create-dkjsg\" (UID: \"a00b970a-2a2a-40c2-bc07-c1bb05d74810\") " pod="openstack/nova-cell1-db-create-dkjsg" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.961489 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac-operator-scripts\") pod \"nova-cell0-db-create-cv5pl\" (UID: \"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac\") " pod="openstack/nova-cell0-db-create-cv5pl" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.961835 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea6137f-d265-4494-a0ba-f92b3bdd82a2-operator-scripts\") pod \"nova-api-29dd-account-create-update-zs2sb\" (UID: \"fea6137f-d265-4494-a0ba-f92b3bdd82a2\") " pod="openstack/nova-api-29dd-account-create-update-zs2sb" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.975164 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-237e-account-create-update-8kj9n"] Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.978973 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65qn4\" (UniqueName: \"kubernetes.io/projected/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac-kube-api-access-65qn4\") pod \"nova-cell0-db-create-cv5pl\" (UID: \"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac\") " pod="openstack/nova-cell0-db-create-cv5pl" Nov 28 12:56:09 crc kubenswrapper[4779]: I1128 12:56:09.980961 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvpqn\" (UniqueName: \"kubernetes.io/projected/fea6137f-d265-4494-a0ba-f92b3bdd82a2-kube-api-access-nvpqn\") pod \"nova-api-29dd-account-create-update-zs2sb\" (UID: \"fea6137f-d265-4494-a0ba-f92b3bdd82a2\") " pod="openstack/nova-api-29dd-account-create-update-zs2sb" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.063177 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rsjl\" (UniqueName: \"kubernetes.io/projected/a00b970a-2a2a-40c2-bc07-c1bb05d74810-kube-api-access-4rsjl\") pod \"nova-cell1-db-create-dkjsg\" (UID: \"a00b970a-2a2a-40c2-bc07-c1bb05d74810\") " pod="openstack/nova-cell1-db-create-dkjsg" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.063234 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a00b970a-2a2a-40c2-bc07-c1bb05d74810-operator-scripts\") pod \"nova-cell1-db-create-dkjsg\" (UID: \"a00b970a-2a2a-40c2-bc07-c1bb05d74810\") " pod="openstack/nova-cell1-db-create-dkjsg" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.063318 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6-operator-scripts\") pod \"nova-cell0-237e-account-create-update-8kj9n\" (UID: \"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6\") " pod="openstack/nova-cell0-237e-account-create-update-8kj9n" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.063396 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dggtm\" (UniqueName: \"kubernetes.io/projected/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6-kube-api-access-dggtm\") pod \"nova-cell0-237e-account-create-update-8kj9n\" (UID: \"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6\") " pod="openstack/nova-cell0-237e-account-create-update-8kj9n" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.064214 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00b970a-2a2a-40c2-bc07-c1bb05d74810-operator-scripts\") pod \"nova-cell1-db-create-dkjsg\" (UID: \"a00b970a-2a2a-40c2-bc07-c1bb05d74810\") " pod="openstack/nova-cell1-db-create-dkjsg" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.064226 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6-operator-scripts\") pod \"nova-cell0-237e-account-create-update-8kj9n\" (UID: \"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6\") " pod="openstack/nova-cell0-237e-account-create-update-8kj9n" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.087709 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dggtm\" (UniqueName: \"kubernetes.io/projected/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6-kube-api-access-dggtm\") pod \"nova-cell0-237e-account-create-update-8kj9n\" (UID: \"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6\") " pod="openstack/nova-cell0-237e-account-create-update-8kj9n" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.104398 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rsjl\" (UniqueName: \"kubernetes.io/projected/a00b970a-2a2a-40c2-bc07-c1bb05d74810-kube-api-access-4rsjl\") pod \"nova-cell1-db-create-dkjsg\" (UID: \"a00b970a-2a2a-40c2-bc07-c1bb05d74810\") " pod="openstack/nova-cell1-db-create-dkjsg" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.138639 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cv5pl" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.150514 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-29dd-account-create-update-zs2sb" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.162185 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f633-account-create-update-sj299"] Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.163268 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f633-account-create-update-sj299" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.168502 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.175960 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f633-account-create-update-sj299"] Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.252529 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dkjsg" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.261700 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-237e-account-create-update-8kj9n" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.268089 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj9hs\" (UniqueName: \"kubernetes.io/projected/cf493f67-858c-412a-9bf8-804c687c4f12-kube-api-access-mj9hs\") pod \"nova-cell1-f633-account-create-update-sj299\" (UID: \"cf493f67-858c-412a-9bf8-804c687c4f12\") " pod="openstack/nova-cell1-f633-account-create-update-sj299" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.268256 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf493f67-858c-412a-9bf8-804c687c4f12-operator-scripts\") pod \"nova-cell1-f633-account-create-update-sj299\" (UID: \"cf493f67-858c-412a-9bf8-804c687c4f12\") " pod="openstack/nova-cell1-f633-account-create-update-sj299" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.376088 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf493f67-858c-412a-9bf8-804c687c4f12-operator-scripts\") pod \"nova-cell1-f633-account-create-update-sj299\" (UID: \"cf493f67-858c-412a-9bf8-804c687c4f12\") " pod="openstack/nova-cell1-f633-account-create-update-sj299" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.376234 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj9hs\" (UniqueName: \"kubernetes.io/projected/cf493f67-858c-412a-9bf8-804c687c4f12-kube-api-access-mj9hs\") pod \"nova-cell1-f633-account-create-update-sj299\" (UID: \"cf493f67-858c-412a-9bf8-804c687c4f12\") " pod="openstack/nova-cell1-f633-account-create-update-sj299" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.377137 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf493f67-858c-412a-9bf8-804c687c4f12-operator-scripts\") pod \"nova-cell1-f633-account-create-update-sj299\" (UID: \"cf493f67-858c-412a-9bf8-804c687c4f12\") " pod="openstack/nova-cell1-f633-account-create-update-sj299" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.414347 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj9hs\" (UniqueName: \"kubernetes.io/projected/cf493f67-858c-412a-9bf8-804c687c4f12-kube-api-access-mj9hs\") pod \"nova-cell1-f633-account-create-update-sj299\" (UID: \"cf493f67-858c-412a-9bf8-804c687c4f12\") " pod="openstack/nova-cell1-f633-account-create-update-sj299" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.519317 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-db-create-dwv55"] Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.519786 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f633-account-create-update-sj299" Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.808638 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cv5pl"] Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.817644 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-29dd-account-create-update-zs2sb"] Nov 28 12:56:10 crc kubenswrapper[4779]: W1128 12:56:10.818866 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3284e9ee_a945_4fb4_ae73_e2e2f580c7ac.slice/crio-8c62b9cab72435a0fafd0530ffc6c65b9c0b08241469bba9a814e3b3460395d8 WatchSource:0}: Error finding container 8c62b9cab72435a0fafd0530ffc6c65b9c0b08241469bba9a814e3b3460395d8: Status 404 returned error can't find the container with id 8c62b9cab72435a0fafd0530ffc6c65b9c0b08241469bba9a814e3b3460395d8 Nov 28 12:56:10 crc kubenswrapper[4779]: I1128 12:56:10.859902 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.058741 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-237e-account-create-update-8kj9n"] Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.085106 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dkjsg"] Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.157572 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f633-account-create-update-sj299"] Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.174000 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5675dff4b5-5c9sq" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.216356 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-66496b66cd-vwvjx"] Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.379409 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-74c96b7975-gndjl" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.450573 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7c7488cf49-mbbwj"] Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.591902 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.593291 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dwv55" event={"ID":"e457ff42-a87d-4bfd-91d1-bcdc8632533d","Type":"ContainerStarted","Data":"5fda55a9074ead71cf4a28f5dcee527e742a7f2838698010eac32aba9e18e764"} Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.596608 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f633-account-create-update-sj299" event={"ID":"cf493f67-858c-412a-9bf8-804c687c4f12","Type":"ContainerStarted","Data":"7f1ff93c9b8005a05d5353d770d8521da45528df9f223036691717bf09180ff9"} Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.597909 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dkjsg" event={"ID":"a00b970a-2a2a-40c2-bc07-c1bb05d74810","Type":"ContainerStarted","Data":"f5404a1245c9b7f96b4b6810438d60db945ac1eb0e72b83c5084f335f15e9280"} Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.600331 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-29dd-account-create-update-zs2sb" event={"ID":"fea6137f-d265-4494-a0ba-f92b3bdd82a2","Type":"ContainerStarted","Data":"6848e08d9756a10ba7a4e0c75eb9164ed57903e550bea275e90102db7154c9e9"} Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.608519 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66496b66cd-vwvjx" event={"ID":"8c363caa-b3e7-4f43-9bf9-8846a5e72c34","Type":"ContainerDied","Data":"afad1b2161d51c693125344e12f6601466a89c9241cb862167e11edc78e0c43f"} Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.608576 4779 scope.go:117] "RemoveContainer" containerID="4332371bb729a4894b6315da2aa228a1048c00c59c447a6af9dcdf9e4babaa03" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.608668 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-66496b66cd-vwvjx" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.621322 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cv5pl" event={"ID":"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac","Type":"ContainerStarted","Data":"8c62b9cab72435a0fafd0530ffc6c65b9c0b08241469bba9a814e3b3460395d8"} Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.624716 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-237e-account-create-update-8kj9n" event={"ID":"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6","Type":"ContainerStarted","Data":"829a3c8d2167101f749d4fc16a13cae884b9a2740e79edeb75883cc9d2c05480"} Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.702162 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-config-data-custom\") pod \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.702364 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-combined-ca-bundle\") pod \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.702405 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-config-data\") pod \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.702494 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrpwj\" (UniqueName: \"kubernetes.io/projected/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-kube-api-access-wrpwj\") pod \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\" (UID: \"8c363caa-b3e7-4f43-9bf9-8846a5e72c34\") " Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.713414 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-kube-api-access-wrpwj" (OuterVolumeSpecName: "kube-api-access-wrpwj") pod "8c363caa-b3e7-4f43-9bf9-8846a5e72c34" (UID: "8c363caa-b3e7-4f43-9bf9-8846a5e72c34"). InnerVolumeSpecName "kube-api-access-wrpwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.719208 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8c363caa-b3e7-4f43-9bf9-8846a5e72c34" (UID: "8c363caa-b3e7-4f43-9bf9-8846a5e72c34"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.744082 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c363caa-b3e7-4f43-9bf9-8846a5e72c34" (UID: "8c363caa-b3e7-4f43-9bf9-8846a5e72c34"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.780602 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-config-data" (OuterVolumeSpecName: "config-data") pod "8c363caa-b3e7-4f43-9bf9-8846a5e72c34" (UID: "8c363caa-b3e7-4f43-9bf9-8846a5e72c34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.804292 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.804900 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrpwj\" (UniqueName: \"kubernetes.io/projected/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-kube-api-access-wrpwj\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.804919 4779 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.804928 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c363caa-b3e7-4f43-9bf9-8846a5e72c34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.959180 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-66496b66cd-vwvjx"] Nov 28 12:56:11 crc kubenswrapper[4779]: I1128 12:56:11.966909 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-66496b66cd-vwvjx"] Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.427521 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.621146 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-config-data-custom\") pod \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.621545 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-combined-ca-bundle\") pod \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.621582 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr52v\" (UniqueName: \"kubernetes.io/projected/0e8eae11-8b73-41ea-a4ac-6c58de45319e-kube-api-access-sr52v\") pod \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.621609 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-config-data\") pod \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\" (UID: \"0e8eae11-8b73-41ea-a4ac-6c58de45319e\") " Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.625278 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0e8eae11-8b73-41ea-a4ac-6c58de45319e" (UID: "0e8eae11-8b73-41ea-a4ac-6c58de45319e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.627316 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e8eae11-8b73-41ea-a4ac-6c58de45319e-kube-api-access-sr52v" (OuterVolumeSpecName: "kube-api-access-sr52v") pod "0e8eae11-8b73-41ea-a4ac-6c58de45319e" (UID: "0e8eae11-8b73-41ea-a4ac-6c58de45319e"). InnerVolumeSpecName "kube-api-access-sr52v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.644258 4779 generic.go:334] "Generic (PLEG): container finished" podID="e457ff42-a87d-4bfd-91d1-bcdc8632533d" containerID="654e9402753c28d374e3c509719b80da86524908ce29ef08f783560c53b34488" exitCode=0 Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.644291 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dwv55" event={"ID":"e457ff42-a87d-4bfd-91d1-bcdc8632533d","Type":"ContainerDied","Data":"654e9402753c28d374e3c509719b80da86524908ce29ef08f783560c53b34488"} Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.647322 4779 generic.go:334] "Generic (PLEG): container finished" podID="892bc73b-f7ab-40c1-af7b-9540725292f1" containerID="f6ce8e51262e48f0e55986bcd03ca7e27b32d59ec9661907fe63a77584e36d84" exitCode=0 Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.647383 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"892bc73b-f7ab-40c1-af7b-9540725292f1","Type":"ContainerDied","Data":"f6ce8e51262e48f0e55986bcd03ca7e27b32d59ec9661907fe63a77584e36d84"} Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.660670 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" event={"ID":"0e8eae11-8b73-41ea-a4ac-6c58de45319e","Type":"ContainerDied","Data":"0b19323dbb732df5270c371b2ccc3126c8a26cb7f6602fbe0676e6a89241ac39"} Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.660711 4779 scope.go:117] "RemoveContainer" containerID="719e836f82801fccc411277b39ff104f0c7d3ec4a8f1c73fb520c6150630cba9" Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.660779 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7c7488cf49-mbbwj" Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.736858 4779 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.737165 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr52v\" (UniqueName: \"kubernetes.io/projected/0e8eae11-8b73-41ea-a4ac-6c58de45319e-kube-api-access-sr52v\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.742061 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e8eae11-8b73-41ea-a4ac-6c58de45319e" (UID: "0e8eae11-8b73-41ea-a4ac-6c58de45319e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.841282 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.870216 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-config-data" (OuterVolumeSpecName: "config-data") pod "0e8eae11-8b73-41ea-a4ac-6c58de45319e" (UID: "0e8eae11-8b73-41ea-a4ac-6c58de45319e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.942247 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e8eae11-8b73-41ea-a4ac-6c58de45319e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.979978 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:12 crc kubenswrapper[4779]: I1128 12:56:12.995633 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7c7488cf49-mbbwj"] Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.003015 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7c7488cf49-mbbwj"] Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.042585 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-combined-ca-bundle\") pod \"892bc73b-f7ab-40c1-af7b-9540725292f1\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.042633 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/892bc73b-f7ab-40c1-af7b-9540725292f1-httpd-run\") pod \"892bc73b-f7ab-40c1-af7b-9540725292f1\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.042688 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/892bc73b-f7ab-40c1-af7b-9540725292f1-logs\") pod \"892bc73b-f7ab-40c1-af7b-9540725292f1\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.042713 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-internal-tls-certs\") pod \"892bc73b-f7ab-40c1-af7b-9540725292f1\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.042740 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-scripts\") pod \"892bc73b-f7ab-40c1-af7b-9540725292f1\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.042794 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"892bc73b-f7ab-40c1-af7b-9540725292f1\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.042850 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6k9g\" (UniqueName: \"kubernetes.io/projected/892bc73b-f7ab-40c1-af7b-9540725292f1-kube-api-access-v6k9g\") pod \"892bc73b-f7ab-40c1-af7b-9540725292f1\" (UID: \"892bc73b-f7ab-40c1-af7b-9540725292f1\") " Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.042912 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-config-data\") pod \"892bc73b-f7ab-40c1-af7b-9540725292f1\" (UID: 
\"892bc73b-f7ab-40c1-af7b-9540725292f1\") " Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.044132 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/892bc73b-f7ab-40c1-af7b-9540725292f1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "892bc73b-f7ab-40c1-af7b-9540725292f1" (UID: "892bc73b-f7ab-40c1-af7b-9540725292f1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.044653 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/892bc73b-f7ab-40c1-af7b-9540725292f1-logs" (OuterVolumeSpecName: "logs") pod "892bc73b-f7ab-40c1-af7b-9540725292f1" (UID: "892bc73b-f7ab-40c1-af7b-9540725292f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.048257 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "892bc73b-f7ab-40c1-af7b-9540725292f1" (UID: "892bc73b-f7ab-40c1-af7b-9540725292f1"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.049201 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892bc73b-f7ab-40c1-af7b-9540725292f1-kube-api-access-v6k9g" (OuterVolumeSpecName: "kube-api-access-v6k9g") pod "892bc73b-f7ab-40c1-af7b-9540725292f1" (UID: "892bc73b-f7ab-40c1-af7b-9540725292f1"). InnerVolumeSpecName "kube-api-access-v6k9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.058055 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-scripts" (OuterVolumeSpecName: "scripts") pod "892bc73b-f7ab-40c1-af7b-9540725292f1" (UID: "892bc73b-f7ab-40c1-af7b-9540725292f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.113195 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "892bc73b-f7ab-40c1-af7b-9540725292f1" (UID: "892bc73b-f7ab-40c1-af7b-9540725292f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.134972 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-config-data" (OuterVolumeSpecName: "config-data") pod "892bc73b-f7ab-40c1-af7b-9540725292f1" (UID: "892bc73b-f7ab-40c1-af7b-9540725292f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.145398 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/892bc73b-f7ab-40c1-af7b-9540725292f1-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.145596 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.145694 4779 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.145750 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6k9g\" (UniqueName: \"kubernetes.io/projected/892bc73b-f7ab-40c1-af7b-9540725292f1-kube-api-access-v6k9g\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.145804 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.145856 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.145905 4779 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/892bc73b-f7ab-40c1-af7b-9540725292f1-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.174208 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "892bc73b-f7ab-40c1-af7b-9540725292f1" (UID: "892bc73b-f7ab-40c1-af7b-9540725292f1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.175507 4779 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.247402 4779 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/892bc73b-f7ab-40c1-af7b-9540725292f1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.247438 4779 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.693528 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87e78f52-9c09-473f-b884-20bc130d6ede","Type":"ContainerStarted","Data":"5a7e9efbce7f12050961b2fce1e7fd418814ef5e96466636d9bcbc7709e31a52"} Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.695070 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="ceilometer-central-agent" containerID="cri-o://54c9afd7053211c99b3569e95bd8e8a163eac184d23587472b56b633aa668bfc" gracePeriod=30 Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.695132 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.695229 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="sg-core" containerID="cri-o://574fa4cac81acc63a4e2e72955c7cc8faf5eb633d211df2e4b0ab828531f232e" gracePeriod=30 Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.695213 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="proxy-httpd" containerID="cri-o://5a7e9efbce7f12050961b2fce1e7fd418814ef5e96466636d9bcbc7709e31a52" gracePeriod=30 Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.695297 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="ceilometer-notification-agent" containerID="cri-o://594bfba7ff09abf9fb17a5fd0cd5630bab6fcb7a51b922ef078edda3684a6780" gracePeriod=30 Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.704651 4779 generic.go:334] "Generic (PLEG): container finished" podID="cf493f67-858c-412a-9bf8-804c687c4f12" containerID="0158cd182892aabae03bca60078a21c1c658f82a89a3b70ab19bdee235f23dd3" exitCode=0 Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.704708 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f633-account-create-update-sj299" event={"ID":"cf493f67-858c-412a-9bf8-804c687c4f12","Type":"ContainerDied","Data":"0158cd182892aabae03bca60078a21c1c658f82a89a3b70ab19bdee235f23dd3"} Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.718845 4779 generic.go:334] "Generic (PLEG): container finished" podID="a00b970a-2a2a-40c2-bc07-c1bb05d74810" containerID="0604a82ed4585c583206490b24ced7dfd0b9874017e9242aaafb4cd5829ad83c" exitCode=0 Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 
12:56:13.718910 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dkjsg" event={"ID":"a00b970a-2a2a-40c2-bc07-c1bb05d74810","Type":"ContainerDied","Data":"0604a82ed4585c583206490b24ced7dfd0b9874017e9242aaafb4cd5829ad83c"} Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.724669 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"892bc73b-f7ab-40c1-af7b-9540725292f1","Type":"ContainerDied","Data":"1383e0c9debe5aa7b1c9e3386675f5558c7623a8eee377273f41f0e7c538f46a"} Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.724716 4779 scope.go:117] "RemoveContainer" containerID="f6ce8e51262e48f0e55986bcd03ca7e27b32d59ec9661907fe63a77584e36d84" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.724847 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.725272 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.845643225 podStartE2EDuration="14.725261594s" podCreationTimestamp="2025-11-28 12:55:59 +0000 UTC" firstStartedPulling="2025-11-28 12:56:00.560706434 +0000 UTC m=+1221.126381788" lastFinishedPulling="2025-11-28 12:56:12.440324803 +0000 UTC m=+1233.006000157" observedRunningTime="2025-11-28 12:56:13.718824094 +0000 UTC m=+1234.284499448" watchObservedRunningTime="2025-11-28 12:56:13.725261594 +0000 UTC m=+1234.290936948" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.741506 4779 generic.go:334] "Generic (PLEG): container finished" podID="fea6137f-d265-4494-a0ba-f92b3bdd82a2" containerID="e2cfa8fd89a409a79882e811dc980291b1440d11cbabb24ecd4dd07a4f42b670" exitCode=0 Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.743166 4779 generic.go:334] "Generic (PLEG): container finished" podID="3284e9ee-a945-4fb4-ae73-e2e2f580c7ac" containerID="5164249bfec8b5dd2e37a19274611d1e2dff07ec378dfa823574c05ed3ecd863" exitCode=0 Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.744823 4779 generic.go:334] "Generic (PLEG): container finished" podID="8a740778-7386-4f7a-ad57-4bd5fa6c2fc6" containerID="1cc5cc53cd478678ee5bb23bb96389485b4796828fbc7ca5a9225ea7559680c6" exitCode=0 Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.800050 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e8eae11-8b73-41ea-a4ac-6c58de45319e" path="/var/lib/kubelet/pods/0e8eae11-8b73-41ea-a4ac-6c58de45319e/volumes" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.800657 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c363caa-b3e7-4f43-9bf9-8846a5e72c34" path="/var/lib/kubelet/pods/8c363caa-b3e7-4f43-9bf9-8846a5e72c34/volumes" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.812876 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-29dd-account-create-update-zs2sb" event={"ID":"fea6137f-d265-4494-a0ba-f92b3bdd82a2","Type":"ContainerDied","Data":"e2cfa8fd89a409a79882e811dc980291b1440d11cbabb24ecd4dd07a4f42b670"} Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.812914 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cv5pl" event={"ID":"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac","Type":"ContainerDied","Data":"5164249bfec8b5dd2e37a19274611d1e2dff07ec378dfa823574c05ed3ecd863"} Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.812931 4779 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-237e-account-create-update-8kj9n" event={"ID":"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6","Type":"ContainerDied","Data":"1cc5cc53cd478678ee5bb23bb96389485b4796828fbc7ca5a9225ea7559680c6"} Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.836980 4779 scope.go:117] "RemoveContainer" containerID="bf54c552df0d3675666c9a5d3ec263945f8d9fd2a5f1772bd9b2fba1af83b947" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.871343 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.895151 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.920145 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:56:13 crc kubenswrapper[4779]: E1128 12:56:13.920735 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c363caa-b3e7-4f43-9bf9-8846a5e72c34" containerName="heat-api" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.920807 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c363caa-b3e7-4f43-9bf9-8846a5e72c34" containerName="heat-api" Nov 28 12:56:13 crc kubenswrapper[4779]: E1128 12:56:13.920887 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e8eae11-8b73-41ea-a4ac-6c58de45319e" containerName="heat-cfnapi" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.920945 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e8eae11-8b73-41ea-a4ac-6c58de45319e" containerName="heat-cfnapi" Nov 28 12:56:13 crc kubenswrapper[4779]: E1128 12:56:13.921009 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892bc73b-f7ab-40c1-af7b-9540725292f1" containerName="glance-httpd" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.921058 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="892bc73b-f7ab-40c1-af7b-9540725292f1" containerName="glance-httpd" Nov 28 12:56:13 crc kubenswrapper[4779]: E1128 12:56:13.921142 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892bc73b-f7ab-40c1-af7b-9540725292f1" containerName="glance-log" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.921209 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="892bc73b-f7ab-40c1-af7b-9540725292f1" containerName="glance-log" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.921434 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e8eae11-8b73-41ea-a4ac-6c58de45319e" containerName="heat-cfnapi" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.921497 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e8eae11-8b73-41ea-a4ac-6c58de45319e" containerName="heat-cfnapi" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.921547 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="892bc73b-f7ab-40c1-af7b-9540725292f1" containerName="glance-httpd" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.921598 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="892bc73b-f7ab-40c1-af7b-9540725292f1" containerName="glance-log" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.921653 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c363caa-b3e7-4f43-9bf9-8846a5e72c34" containerName="heat-api" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.921705 4779 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="8c363caa-b3e7-4f43-9bf9-8846a5e72c34" containerName="heat-api" Nov 28 12:56:13 crc kubenswrapper[4779]: E1128 12:56:13.921928 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c363caa-b3e7-4f43-9bf9-8846a5e72c34" containerName="heat-api" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.921986 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c363caa-b3e7-4f43-9bf9-8846a5e72c34" containerName="heat-api" Nov 28 12:56:13 crc kubenswrapper[4779]: E1128 12:56:13.922037 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e8eae11-8b73-41ea-a4ac-6c58de45319e" containerName="heat-cfnapi" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.922085 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e8eae11-8b73-41ea-a4ac-6c58de45319e" containerName="heat-cfnapi" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.922913 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.936933 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.937141 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 12:56:13 crc kubenswrapper[4779]: I1128 12:56:13.992177 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.073562 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05d19641-3a16-482f-bcaf-da12573ca2e6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.073839 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05d19641-3a16-482f-bcaf-da12573ca2e6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.073860 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvc4z\" (UniqueName: \"kubernetes.io/projected/05d19641-3a16-482f-bcaf-da12573ca2e6-kube-api-access-kvc4z\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.073884 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05d19641-3a16-482f-bcaf-da12573ca2e6-logs\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.073922 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.073938 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05d19641-3a16-482f-bcaf-da12573ca2e6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.074016 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05d19641-3a16-482f-bcaf-da12573ca2e6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.074042 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05d19641-3a16-482f-bcaf-da12573ca2e6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.175062 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05d19641-3a16-482f-bcaf-da12573ca2e6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.175120 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05d19641-3a16-482f-bcaf-da12573ca2e6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.175166 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05d19641-3a16-482f-bcaf-da12573ca2e6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.175187 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05d19641-3a16-482f-bcaf-da12573ca2e6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.175205 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvc4z\" (UniqueName: \"kubernetes.io/projected/05d19641-3a16-482f-bcaf-da12573ca2e6-kube-api-access-kvc4z\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.175227 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05d19641-3a16-482f-bcaf-da12573ca2e6-logs\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.175260 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.175276 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05d19641-3a16-482f-bcaf-da12573ca2e6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.175736 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05d19641-3a16-482f-bcaf-da12573ca2e6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.176049 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05d19641-3a16-482f-bcaf-da12573ca2e6-logs\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.177222 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.182973 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05d19641-3a16-482f-bcaf-da12573ca2e6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.183958 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05d19641-3a16-482f-bcaf-da12573ca2e6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.188032 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05d19641-3a16-482f-bcaf-da12573ca2e6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.192742 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05d19641-3a16-482f-bcaf-da12573ca2e6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.196727 4779 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kvc4z\" (UniqueName: \"kubernetes.io/projected/05d19641-3a16-482f-bcaf-da12573ca2e6-kube-api-access-kvc4z\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.209529 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"05d19641-3a16-482f-bcaf-da12573ca2e6\") " pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.282878 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.294824 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dwv55" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.489310 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e457ff42-a87d-4bfd-91d1-bcdc8632533d-operator-scripts\") pod \"e457ff42-a87d-4bfd-91d1-bcdc8632533d\" (UID: \"e457ff42-a87d-4bfd-91d1-bcdc8632533d\") " Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.489620 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbm57\" (UniqueName: \"kubernetes.io/projected/e457ff42-a87d-4bfd-91d1-bcdc8632533d-kube-api-access-lbm57\") pod \"e457ff42-a87d-4bfd-91d1-bcdc8632533d\" (UID: \"e457ff42-a87d-4bfd-91d1-bcdc8632533d\") " Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.490155 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e457ff42-a87d-4bfd-91d1-bcdc8632533d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e457ff42-a87d-4bfd-91d1-bcdc8632533d" (UID: "e457ff42-a87d-4bfd-91d1-bcdc8632533d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.493212 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e457ff42-a87d-4bfd-91d1-bcdc8632533d-kube-api-access-lbm57" (OuterVolumeSpecName: "kube-api-access-lbm57") pod "e457ff42-a87d-4bfd-91d1-bcdc8632533d" (UID: "e457ff42-a87d-4bfd-91d1-bcdc8632533d"). InnerVolumeSpecName "kube-api-access-lbm57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.591795 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e457ff42-a87d-4bfd-91d1-bcdc8632533d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.591820 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbm57\" (UniqueName: \"kubernetes.io/projected/e457ff42-a87d-4bfd-91d1-bcdc8632533d-kube-api-access-lbm57\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.754315 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dwv55" event={"ID":"e457ff42-a87d-4bfd-91d1-bcdc8632533d","Type":"ContainerDied","Data":"5fda55a9074ead71cf4a28f5dcee527e742a7f2838698010eac32aba9e18e764"} Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.754352 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fda55a9074ead71cf4a28f5dcee527e742a7f2838698010eac32aba9e18e764" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.754362 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dwv55" Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.758058 4779 generic.go:334] "Generic (PLEG): container finished" podID="87e78f52-9c09-473f-b884-20bc130d6ede" containerID="5a7e9efbce7f12050961b2fce1e7fd418814ef5e96466636d9bcbc7709e31a52" exitCode=0 Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.758083 4779 generic.go:334] "Generic (PLEG): container finished" podID="87e78f52-9c09-473f-b884-20bc130d6ede" containerID="574fa4cac81acc63a4e2e72955c7cc8faf5eb633d211df2e4b0ab828531f232e" exitCode=2 Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.758103 4779 generic.go:334] "Generic (PLEG): container finished" podID="87e78f52-9c09-473f-b884-20bc130d6ede" containerID="594bfba7ff09abf9fb17a5fd0cd5630bab6fcb7a51b922ef078edda3684a6780" exitCode=0 Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.758125 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87e78f52-9c09-473f-b884-20bc130d6ede","Type":"ContainerDied","Data":"5a7e9efbce7f12050961b2fce1e7fd418814ef5e96466636d9bcbc7709e31a52"} Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.758156 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87e78f52-9c09-473f-b884-20bc130d6ede","Type":"ContainerDied","Data":"574fa4cac81acc63a4e2e72955c7cc8faf5eb633d211df2e4b0ab828531f232e"} Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.758172 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87e78f52-9c09-473f-b884-20bc130d6ede","Type":"ContainerDied","Data":"594bfba7ff09abf9fb17a5fd0cd5630bab6fcb7a51b922ef078edda3684a6780"} Nov 28 12:56:14 crc kubenswrapper[4779]: I1128 12:56:14.865753 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.274901 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cv5pl" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.403628 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f633-account-create-update-sj299" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.406973 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-29dd-account-create-update-zs2sb" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.407948 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac-operator-scripts\") pod \"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac\" (UID: \"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac\") " Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.407971 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65qn4\" (UniqueName: \"kubernetes.io/projected/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac-kube-api-access-65qn4\") pod \"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac\" (UID: \"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac\") " Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.408536 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3284e9ee-a945-4fb4-ae73-e2e2f580c7ac" (UID: "3284e9ee-a945-4fb4-ae73-e2e2f580c7ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.414887 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac-kube-api-access-65qn4" (OuterVolumeSpecName: "kube-api-access-65qn4") pod "3284e9ee-a945-4fb4-ae73-e2e2f580c7ac" (UID: "3284e9ee-a945-4fb4-ae73-e2e2f580c7ac"). InnerVolumeSpecName "kube-api-access-65qn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.512570 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj9hs\" (UniqueName: \"kubernetes.io/projected/cf493f67-858c-412a-9bf8-804c687c4f12-kube-api-access-mj9hs\") pod \"cf493f67-858c-412a-9bf8-804c687c4f12\" (UID: \"cf493f67-858c-412a-9bf8-804c687c4f12\") " Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.513455 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea6137f-d265-4494-a0ba-f92b3bdd82a2-operator-scripts\") pod \"fea6137f-d265-4494-a0ba-f92b3bdd82a2\" (UID: \"fea6137f-d265-4494-a0ba-f92b3bdd82a2\") " Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.513641 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf493f67-858c-412a-9bf8-804c687c4f12-operator-scripts\") pod \"cf493f67-858c-412a-9bf8-804c687c4f12\" (UID: \"cf493f67-858c-412a-9bf8-804c687c4f12\") " Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.513672 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvpqn\" (UniqueName: \"kubernetes.io/projected/fea6137f-d265-4494-a0ba-f92b3bdd82a2-kube-api-access-nvpqn\") pod \"fea6137f-d265-4494-a0ba-f92b3bdd82a2\" (UID: \"fea6137f-d265-4494-a0ba-f92b3bdd82a2\") " Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.516672 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.516691 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65qn4\" (UniqueName: \"kubernetes.io/projected/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac-kube-api-access-65qn4\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.518007 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf493f67-858c-412a-9bf8-804c687c4f12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf493f67-858c-412a-9bf8-804c687c4f12" (UID: "cf493f67-858c-412a-9bf8-804c687c4f12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.520250 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf493f67-858c-412a-9bf8-804c687c4f12-kube-api-access-mj9hs" (OuterVolumeSpecName: "kube-api-access-mj9hs") pod "cf493f67-858c-412a-9bf8-804c687c4f12" (UID: "cf493f67-858c-412a-9bf8-804c687c4f12"). InnerVolumeSpecName "kube-api-access-mj9hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.537983 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dkjsg" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.539304 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea6137f-d265-4494-a0ba-f92b3bdd82a2-kube-api-access-nvpqn" (OuterVolumeSpecName: "kube-api-access-nvpqn") pod "fea6137f-d265-4494-a0ba-f92b3bdd82a2" (UID: "fea6137f-d265-4494-a0ba-f92b3bdd82a2"). 
InnerVolumeSpecName "kube-api-access-nvpqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.539542 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea6137f-d265-4494-a0ba-f92b3bdd82a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fea6137f-d265-4494-a0ba-f92b3bdd82a2" (UID: "fea6137f-d265-4494-a0ba-f92b3bdd82a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.570858 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-237e-account-create-update-8kj9n" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.618220 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf493f67-858c-412a-9bf8-804c687c4f12-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.618246 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvpqn\" (UniqueName: \"kubernetes.io/projected/fea6137f-d265-4494-a0ba-f92b3bdd82a2-kube-api-access-nvpqn\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.618256 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj9hs\" (UniqueName: \"kubernetes.io/projected/cf493f67-858c-412a-9bf8-804c687c4f12-kube-api-access-mj9hs\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.618264 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea6137f-d265-4494-a0ba-f92b3bdd82a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.720840 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6-operator-scripts\") pod \"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6\" (UID: \"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6\") " Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.721144 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rsjl\" (UniqueName: \"kubernetes.io/projected/a00b970a-2a2a-40c2-bc07-c1bb05d74810-kube-api-access-4rsjl\") pod \"a00b970a-2a2a-40c2-bc07-c1bb05d74810\" (UID: \"a00b970a-2a2a-40c2-bc07-c1bb05d74810\") " Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.721188 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dggtm\" (UniqueName: \"kubernetes.io/projected/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6-kube-api-access-dggtm\") pod \"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6\" (UID: \"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6\") " Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.721229 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00b970a-2a2a-40c2-bc07-c1bb05d74810-operator-scripts\") pod \"a00b970a-2a2a-40c2-bc07-c1bb05d74810\" (UID: \"a00b970a-2a2a-40c2-bc07-c1bb05d74810\") " Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.721830 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00b970a-2a2a-40c2-bc07-c1bb05d74810-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "a00b970a-2a2a-40c2-bc07-c1bb05d74810" (UID: "a00b970a-2a2a-40c2-bc07-c1bb05d74810"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.722604 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a740778-7386-4f7a-ad57-4bd5fa6c2fc6" (UID: "8a740778-7386-4f7a-ad57-4bd5fa6c2fc6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.725485 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00b970a-2a2a-40c2-bc07-c1bb05d74810-kube-api-access-4rsjl" (OuterVolumeSpecName: "kube-api-access-4rsjl") pod "a00b970a-2a2a-40c2-bc07-c1bb05d74810" (UID: "a00b970a-2a2a-40c2-bc07-c1bb05d74810"). InnerVolumeSpecName "kube-api-access-4rsjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.725541 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6-kube-api-access-dggtm" (OuterVolumeSpecName: "kube-api-access-dggtm") pod "8a740778-7386-4f7a-ad57-4bd5fa6c2fc6" (UID: "8a740778-7386-4f7a-ad57-4bd5fa6c2fc6"). InnerVolumeSpecName "kube-api-access-dggtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.737741 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="892bc73b-f7ab-40c1-af7b-9540725292f1" path="/var/lib/kubelet/pods/892bc73b-f7ab-40c1-af7b-9540725292f1/volumes" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.802238 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f633-account-create-update-sj299" event={"ID":"cf493f67-858c-412a-9bf8-804c687c4f12","Type":"ContainerDied","Data":"7f1ff93c9b8005a05d5353d770d8521da45528df9f223036691717bf09180ff9"} Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.802279 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f633-account-create-update-sj299" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.802292 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f1ff93c9b8005a05d5353d770d8521da45528df9f223036691717bf09180ff9" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.809208 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dkjsg" event={"ID":"a00b970a-2a2a-40c2-bc07-c1bb05d74810","Type":"ContainerDied","Data":"f5404a1245c9b7f96b4b6810438d60db945ac1eb0e72b83c5084f335f15e9280"} Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.809258 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5404a1245c9b7f96b4b6810438d60db945ac1eb0e72b83c5084f335f15e9280" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.809302 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-dkjsg" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.812074 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-29dd-account-create-update-zs2sb" event={"ID":"fea6137f-d265-4494-a0ba-f92b3bdd82a2","Type":"ContainerDied","Data":"6848e08d9756a10ba7a4e0c75eb9164ed57903e550bea275e90102db7154c9e9"} Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.812128 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6848e08d9756a10ba7a4e0c75eb9164ed57903e550bea275e90102db7154c9e9" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.812211 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-29dd-account-create-update-zs2sb" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.819820 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cv5pl" event={"ID":"3284e9ee-a945-4fb4-ae73-e2e2f580c7ac","Type":"ContainerDied","Data":"8c62b9cab72435a0fafd0530ffc6c65b9c0b08241469bba9a814e3b3460395d8"} Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.819855 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c62b9cab72435a0fafd0530ffc6c65b9c0b08241469bba9a814e3b3460395d8" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.819950 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cv5pl" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.822888 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.822919 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rsjl\" (UniqueName: \"kubernetes.io/projected/a00b970a-2a2a-40c2-bc07-c1bb05d74810-kube-api-access-4rsjl\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.822928 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dggtm\" (UniqueName: \"kubernetes.io/projected/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6-kube-api-access-dggtm\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.822938 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00b970a-2a2a-40c2-bc07-c1bb05d74810-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.827581 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-237e-account-create-update-8kj9n" event={"ID":"8a740778-7386-4f7a-ad57-4bd5fa6c2fc6","Type":"ContainerDied","Data":"829a3c8d2167101f749d4fc16a13cae884b9a2740e79edeb75883cc9d2c05480"} Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.827687 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="829a3c8d2167101f749d4fc16a13cae884b9a2740e79edeb75883cc9d2c05480" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.827780 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-237e-account-create-update-8kj9n" Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.833736 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05d19641-3a16-482f-bcaf-da12573ca2e6","Type":"ContainerStarted","Data":"e895d8abf7ab6eecd4bb77df4895c6c2e8a7bdb9a64c8c68e36a9111764c966a"} Nov 28 12:56:15 crc kubenswrapper[4779]: I1128 12:56:15.833793 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05d19641-3a16-482f-bcaf-da12573ca2e6","Type":"ContainerStarted","Data":"eb12a0e9c9dfef2096f7fb2aed3fef3807726853ecf850a67cc41f87bfb3e260"} Nov 28 12:56:16 crc kubenswrapper[4779]: I1128 12:56:16.850636 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05d19641-3a16-482f-bcaf-da12573ca2e6","Type":"ContainerStarted","Data":"ca030f5e4ebdbe2a601cb8f17b1391fecbf7601c53e170a557f97237eedc1e50"} Nov 28 12:56:16 crc kubenswrapper[4779]: I1128 12:56:16.877582 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.877568421 podStartE2EDuration="3.877568421s" podCreationTimestamp="2025-11-28 12:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:56:16.876256686 +0000 UTC m=+1237.441932040" watchObservedRunningTime="2025-11-28 12:56:16.877568421 +0000 UTC m=+1237.443243775" Nov 28 12:56:17 crc kubenswrapper[4779]: I1128 12:56:17.495264 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6dc88d6fdd-9vtxx" Nov 28 12:56:17 crc kubenswrapper[4779]: I1128 12:56:17.551767 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-55ff4b54d5-48p68"] Nov 28 12:56:17 crc kubenswrapper[4779]: I1128 12:56:17.552181 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-55ff4b54d5-48p68" podUID="ed331d56-a8fd-4fd5-8d71-45f853a35fa8" containerName="heat-engine" containerID="cri-o://a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e" gracePeriod=60 Nov 28 12:56:17 crc kubenswrapper[4779]: I1128 12:56:17.865158 4779 generic.go:334] "Generic (PLEG): container finished" podID="87e78f52-9c09-473f-b884-20bc130d6ede" containerID="54c9afd7053211c99b3569e95bd8e8a163eac184d23587472b56b633aa668bfc" exitCode=0 Nov 28 12:56:17 crc kubenswrapper[4779]: I1128 12:56:17.865496 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87e78f52-9c09-473f-b884-20bc130d6ede","Type":"ContainerDied","Data":"54c9afd7053211c99b3569e95bd8e8a163eac184d23587472b56b633aa668bfc"} Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.032954 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.114677 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-config-data\") pod \"87e78f52-9c09-473f-b884-20bc130d6ede\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.115283 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmt79\" (UniqueName: \"kubernetes.io/projected/87e78f52-9c09-473f-b884-20bc130d6ede-kube-api-access-mmt79\") pod \"87e78f52-9c09-473f-b884-20bc130d6ede\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.115491 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-sg-core-conf-yaml\") pod \"87e78f52-9c09-473f-b884-20bc130d6ede\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.115603 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-scripts\") pod \"87e78f52-9c09-473f-b884-20bc130d6ede\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.115705 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-combined-ca-bundle\") pod \"87e78f52-9c09-473f-b884-20bc130d6ede\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.115789 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87e78f52-9c09-473f-b884-20bc130d6ede-run-httpd\") pod \"87e78f52-9c09-473f-b884-20bc130d6ede\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.115965 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87e78f52-9c09-473f-b884-20bc130d6ede-log-httpd\") pod \"87e78f52-9c09-473f-b884-20bc130d6ede\" (UID: \"87e78f52-9c09-473f-b884-20bc130d6ede\") " Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.116068 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e78f52-9c09-473f-b884-20bc130d6ede-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "87e78f52-9c09-473f-b884-20bc130d6ede" (UID: "87e78f52-9c09-473f-b884-20bc130d6ede"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.116408 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e78f52-9c09-473f-b884-20bc130d6ede-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "87e78f52-9c09-473f-b884-20bc130d6ede" (UID: "87e78f52-9c09-473f-b884-20bc130d6ede"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.116729 4779 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87e78f52-9c09-473f-b884-20bc130d6ede-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.116813 4779 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87e78f52-9c09-473f-b884-20bc130d6ede-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.136308 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e78f52-9c09-473f-b884-20bc130d6ede-kube-api-access-mmt79" (OuterVolumeSpecName: "kube-api-access-mmt79") pod "87e78f52-9c09-473f-b884-20bc130d6ede" (UID: "87e78f52-9c09-473f-b884-20bc130d6ede"). InnerVolumeSpecName "kube-api-access-mmt79". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.140539 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-scripts" (OuterVolumeSpecName: "scripts") pod "87e78f52-9c09-473f-b884-20bc130d6ede" (UID: "87e78f52-9c09-473f-b884-20bc130d6ede"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.172354 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "87e78f52-9c09-473f-b884-20bc130d6ede" (UID: "87e78f52-9c09-473f-b884-20bc130d6ede"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.205815 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.206112 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f67e7c90-06fb-42ba-98c6-b30f8f9d2829" containerName="glance-log" containerID="cri-o://483276b16bb4b3bf69e3be86d05e08d9a9522bf4c858a5c52c1b459d2a00c5c8" gracePeriod=30 Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.206273 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f67e7c90-06fb-42ba-98c6-b30f8f9d2829" containerName="glance-httpd" containerID="cri-o://59c13bcf2469efa826e82d798716c85b168f3ba99f1de588a860b5f0ee81b3ba" gracePeriod=30 Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.227706 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmt79\" (UniqueName: \"kubernetes.io/projected/87e78f52-9c09-473f-b884-20bc130d6ede-kube-api-access-mmt79\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.227746 4779 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.227758 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.267347 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87e78f52-9c09-473f-b884-20bc130d6ede" (UID: "87e78f52-9c09-473f-b884-20bc130d6ede"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.284564 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-config-data" (OuterVolumeSpecName: "config-data") pod "87e78f52-9c09-473f-b884-20bc130d6ede" (UID: "87e78f52-9c09-473f-b884-20bc130d6ede"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.329158 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.329183 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e78f52-9c09-473f-b884-20bc130d6ede-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.883084 4779 generic.go:334] "Generic (PLEG): container finished" podID="f67e7c90-06fb-42ba-98c6-b30f8f9d2829" containerID="483276b16bb4b3bf69e3be86d05e08d9a9522bf4c858a5c52c1b459d2a00c5c8" exitCode=143 Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.883182 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f67e7c90-06fb-42ba-98c6-b30f8f9d2829","Type":"ContainerDied","Data":"483276b16bb4b3bf69e3be86d05e08d9a9522bf4c858a5c52c1b459d2a00c5c8"} Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.885949 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87e78f52-9c09-473f-b884-20bc130d6ede","Type":"ContainerDied","Data":"80bc8c55bdeb515790b99c9f3dc54b05db23928a9fa10ac42f315f3d915e89d2"} Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.885981 4779 scope.go:117] "RemoveContainer" containerID="5a7e9efbce7f12050961b2fce1e7fd418814ef5e96466636d9bcbc7709e31a52" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.886141 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.925524 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.925627 4779 scope.go:117] "RemoveContainer" containerID="574fa4cac81acc63a4e2e72955c7cc8faf5eb633d211df2e4b0ab828531f232e" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.950050 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.962440 4779 scope.go:117] "RemoveContainer" containerID="594bfba7ff09abf9fb17a5fd0cd5630bab6fcb7a51b922ef078edda3684a6780" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.962553 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:18 crc kubenswrapper[4779]: E1128 12:56:18.962857 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="sg-core" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.962872 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="sg-core" Nov 28 12:56:18 crc kubenswrapper[4779]: E1128 12:56:18.962901 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="ceilometer-central-agent" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.962908 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="ceilometer-central-agent" Nov 28 12:56:18 crc kubenswrapper[4779]: E1128 12:56:18.962915 4779 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a00b970a-2a2a-40c2-bc07-c1bb05d74810" containerName="mariadb-database-create" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.962921 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00b970a-2a2a-40c2-bc07-c1bb05d74810" containerName="mariadb-database-create" Nov 28 12:56:18 crc kubenswrapper[4779]: E1128 12:56:18.962934 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="proxy-httpd" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.962940 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="proxy-httpd" Nov 28 12:56:18 crc kubenswrapper[4779]: E1128 12:56:18.962949 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea6137f-d265-4494-a0ba-f92b3bdd82a2" containerName="mariadb-account-create-update" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.962955 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea6137f-d265-4494-a0ba-f92b3bdd82a2" containerName="mariadb-account-create-update" Nov 28 12:56:18 crc kubenswrapper[4779]: E1128 12:56:18.962966 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf493f67-858c-412a-9bf8-804c687c4f12" containerName="mariadb-account-create-update" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.962972 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf493f67-858c-412a-9bf8-804c687c4f12" containerName="mariadb-account-create-update" Nov 28 12:56:18 crc kubenswrapper[4779]: E1128 12:56:18.962984 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e457ff42-a87d-4bfd-91d1-bcdc8632533d" containerName="mariadb-database-create" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.962990 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e457ff42-a87d-4bfd-91d1-bcdc8632533d" containerName="mariadb-database-create" Nov 28 12:56:18 crc kubenswrapper[4779]: E1128 12:56:18.962997 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a740778-7386-4f7a-ad57-4bd5fa6c2fc6" containerName="mariadb-account-create-update" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963003 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a740778-7386-4f7a-ad57-4bd5fa6c2fc6" containerName="mariadb-account-create-update" Nov 28 12:56:18 crc kubenswrapper[4779]: E1128 12:56:18.963014 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="ceilometer-notification-agent" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963020 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="ceilometer-notification-agent" Nov 28 12:56:18 crc kubenswrapper[4779]: E1128 12:56:18.963028 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3284e9ee-a945-4fb4-ae73-e2e2f580c7ac" containerName="mariadb-database-create" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963033 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="3284e9ee-a945-4fb4-ae73-e2e2f580c7ac" containerName="mariadb-database-create" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963205 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf493f67-858c-412a-9bf8-804c687c4f12" containerName="mariadb-account-create-update" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963217 4779 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e457ff42-a87d-4bfd-91d1-bcdc8632533d" containerName="mariadb-database-create" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963228 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="sg-core" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963239 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="ceilometer-notification-agent" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963251 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a740778-7386-4f7a-ad57-4bd5fa6c2fc6" containerName="mariadb-account-create-update" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963259 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="ceilometer-central-agent" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963272 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00b970a-2a2a-40c2-bc07-c1bb05d74810" containerName="mariadb-database-create" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963284 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="3284e9ee-a945-4fb4-ae73-e2e2f580c7ac" containerName="mariadb-database-create" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963297 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" containerName="proxy-httpd" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.963303 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="fea6137f-d265-4494-a0ba-f92b3bdd82a2" containerName="mariadb-account-create-update" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.965200 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.970792 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.970964 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 12:56:18 crc kubenswrapper[4779]: I1128 12:56:18.971645 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.005884 4779 scope.go:117] "RemoveContainer" containerID="54c9afd7053211c99b3569e95bd8e8a163eac184d23587472b56b633aa668bfc" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.150720 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-config-data\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.151160 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f8bf0a-80ca-401d-97ce-393a3c58632e-log-httpd\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.151363 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-scripts\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.151424 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.151558 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f8bf0a-80ca-401d-97ce-393a3c58632e-run-httpd\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.151597 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.151628 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btlhn\" (UniqueName: \"kubernetes.io/projected/a2f8bf0a-80ca-401d-97ce-393a3c58632e-kube-api-access-btlhn\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.253172 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-scripts\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.253234 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.253318 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f8bf0a-80ca-401d-97ce-393a3c58632e-run-httpd\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.253346 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.253370 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btlhn\" (UniqueName: \"kubernetes.io/projected/a2f8bf0a-80ca-401d-97ce-393a3c58632e-kube-api-access-btlhn\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.253402 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-config-data\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.253473 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f8bf0a-80ca-401d-97ce-393a3c58632e-log-httpd\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.253855 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f8bf0a-80ca-401d-97ce-393a3c58632e-run-httpd\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.254014 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f8bf0a-80ca-401d-97ce-393a3c58632e-log-httpd\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.259110 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-scripts\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.260913 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-config-data\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.263855 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.271716 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.272195 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btlhn\" (UniqueName: \"kubernetes.io/projected/a2f8bf0a-80ca-401d-97ce-393a3c58632e-kube-api-access-btlhn\") pod \"ceilometer-0\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.304152 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.739046 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e78f52-9c09-473f-b884-20bc130d6ede" path="/var/lib/kubelet/pods/87e78f52-9c09-473f-b884-20bc130d6ede/volumes" Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.775523 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:19 crc kubenswrapper[4779]: W1128 12:56:19.776248 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2f8bf0a_80ca_401d_97ce_393a3c58632e.slice/crio-5e5a64091523aadc92f633747d311ecf3f4194005dc7290b871564163db7e2e1 WatchSource:0}: Error finding container 5e5a64091523aadc92f633747d311ecf3f4194005dc7290b871564163db7e2e1: Status 404 returned error can't find the container with id 5e5a64091523aadc92f633747d311ecf3f4194005dc7290b871564163db7e2e1 Nov 28 12:56:19 crc kubenswrapper[4779]: I1128 12:56:19.896479 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f8bf0a-80ca-401d-97ce-393a3c58632e","Type":"ContainerStarted","Data":"5e5a64091523aadc92f633747d311ecf3f4194005dc7290b871564163db7e2e1"} Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.201152 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tlgl6"] Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.202397 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:20 crc kubenswrapper[4779]: W1128 12:56:20.203756 4779 reflector.go:561] object-"openstack"/"nova-cell0-conductor-config-data": failed to list *v1.Secret: secrets "nova-cell0-conductor-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 28 12:56:20 crc kubenswrapper[4779]: E1128 12:56:20.203803 4779 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-cell0-conductor-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 12:56:20 crc kubenswrapper[4779]: W1128 12:56:20.204242 4779 reflector.go:561] object-"openstack"/"nova-cell0-conductor-scripts": failed to list *v1.Secret: secrets "nova-cell0-conductor-scripts" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 28 12:56:20 crc kubenswrapper[4779]: E1128 12:56:20.204263 4779 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-cell0-conductor-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.206680 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2slfk" Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.220053 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tlgl6"] Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.371201 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.371238 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-scripts\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.371291 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-config-data\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.371344 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57jfg\" (UniqueName: 
\"kubernetes.io/projected/21e20123-0f0f-48a0-8412-6167f107ed2a-kube-api-access-57jfg\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.472998 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.473031 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-scripts\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.473072 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-config-data\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.473142 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57jfg\" (UniqueName: \"kubernetes.io/projected/21e20123-0f0f-48a0-8412-6167f107ed2a-kube-api-access-57jfg\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.479570 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.488869 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57jfg\" (UniqueName: \"kubernetes.io/projected/21e20123-0f0f-48a0-8412-6167f107ed2a-kube-api-access-57jfg\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:20 crc kubenswrapper[4779]: E1128 12:56:20.801225 4779 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 12:56:20 crc kubenswrapper[4779]: E1128 12:56:20.803585 4779 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 12:56:20 crc kubenswrapper[4779]: E1128 12:56:20.804914 4779 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 12:56:20 crc kubenswrapper[4779]: E1128 12:56:20.804945 4779 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-55ff4b54d5-48p68" podUID="ed331d56-a8fd-4fd5-8d71-45f853a35fa8" containerName="heat-engine" Nov 28 12:56:20 crc kubenswrapper[4779]: I1128 12:56:20.915252 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f8bf0a-80ca-401d-97ce-393a3c58632e","Type":"ContainerStarted","Data":"b2083fa439ee9f511a3f2aa7008a0d88d383016308fd6ec290adfe8b372ed815"} Nov 28 12:56:21 crc kubenswrapper[4779]: I1128 12:56:21.281501 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 28 12:56:21 crc kubenswrapper[4779]: I1128 12:56:21.289836 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-scripts\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:21 crc kubenswrapper[4779]: E1128 12:56:21.474418 4779 secret.go:188] Couldn't get secret openstack/nova-cell0-conductor-config-data: failed to sync secret cache: timed out waiting for the condition Nov 28 12:56:21 crc kubenswrapper[4779]: E1128 12:56:21.474754 4779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-config-data podName:21e20123-0f0f-48a0-8412-6167f107ed2a nodeName:}" failed. No retries permitted until 2025-11-28 12:56:21.974730999 +0000 UTC m=+1242.540406353 (durationBeforeRetry 500ms). 
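The three ExecSync failures and the "Probe errored" line are an exec readiness probe (pgrep ... heat-engine) racing the container's own shutdown: CRI-O rejects new exec sessions once a container is stopping, so during heat-engine's 60s grace period the probe cannot run at all. A sketch of why that surfaces as a probe error rather than a "not ready" verdict (illustrative logic only):

package main

import (
	"errors"
	"fmt"
)

var errStopping = errors.New("cannot register an exec PID: container is stopping")

// An rpc failure from the runtime is not a probe verdict: it is reported as
// "Probe errored" and the last known readiness is kept.
func runReadinessProbe(execSync func(cmd []string) error) string {
	if err := execSync([]string{"/usr/bin/pgrep", "-r", "DRST", "heat-engine"}); err != nil {
		return fmt.Sprintf("Probe errored: %v", err)
	}
	return "ready"
}

func main() {
	fmt.Println(runReadinessProbe(func([]string) error { return errStopping }))
}
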
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-config-data") pod "nova-cell0-conductor-db-sync-tlgl6" (UID: "21e20123-0f0f-48a0-8412-6167f107ed2a") : failed to sync secret cache: timed out waiting for the condition Nov 28 12:56:21 crc kubenswrapper[4779]: I1128 12:56:21.807915 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 12:56:21 crc kubenswrapper[4779]: I1128 12:56:21.928163 4779 generic.go:334] "Generic (PLEG): container finished" podID="f67e7c90-06fb-42ba-98c6-b30f8f9d2829" containerID="59c13bcf2469efa826e82d798716c85b168f3ba99f1de588a860b5f0ee81b3ba" exitCode=0 Nov 28 12:56:21 crc kubenswrapper[4779]: I1128 12:56:21.928210 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f67e7c90-06fb-42ba-98c6-b30f8f9d2829","Type":"ContainerDied","Data":"59c13bcf2469efa826e82d798716c85b168f3ba99f1de588a860b5f0ee81b3ba"} Nov 28 12:56:21 crc kubenswrapper[4779]: I1128 12:56:21.930541 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f8bf0a-80ca-401d-97ce-393a3c58632e","Type":"ContainerStarted","Data":"e3bbf133d8c53cbc78e2eb027ed4358e04e37affd3720ab6dc28daeb405de989"} Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.001471 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-config-data\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.005004 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-config-data\") pod \"nova-cell0-conductor-db-sync-tlgl6\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.088387 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.164138 4779 util.go:48] "No ready sandbox for pod can be found. 
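The forbidden-secret reflector errors earlier and this failed config-data mount are the same race: the node authorizer only lets the kubelet read a secret once a pod referencing it is bound to this node, so the first mount attempt times out waiting for the secret cache, is parked for durationBeforeRetry=500ms, and succeeds on retry right after "Caches populated ... nova-cell0-conductor-config-data". A sketch of the backoff arithmetic for parked operations, assuming the usual initial=500ms, factor-of-2 progression with a cap:

package main

import (
	"fmt"
	"time"
)

func durationBeforeRetry(failures int) time.Duration {
	d := 500 * time.Millisecond
	for i := 1; i < failures; i++ {
		d *= 2
	}
	if max := 2 * time.Minute; d > max { // cap so a flapping volume never waits forever
		d = max
	}
	return d
}

func main() {
	for f := 1; f <= 4; f++ {
		fmt.Printf("failure %d -> retry in %v\n", f, durationBeforeRetry(f))
	}
}
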
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.307214 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-scripts\") pod \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.307505 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7brf\" (UniqueName: \"kubernetes.io/projected/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-kube-api-access-b7brf\") pod \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.307524 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-logs\") pod \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.307580 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-public-tls-certs\") pod \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.307729 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-combined-ca-bundle\") pod \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.307778 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.307841 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-config-data\") pod \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.307887 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-httpd-run\") pod \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\" (UID: \"f67e7c90-06fb-42ba-98c6-b30f8f9d2829\") " Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.308558 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f67e7c90-06fb-42ba-98c6-b30f8f9d2829" (UID: "f67e7c90-06fb-42ba-98c6-b30f8f9d2829"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.310510 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-logs" (OuterVolumeSpecName: "logs") pod "f67e7c90-06fb-42ba-98c6-b30f8f9d2829" (UID: "f67e7c90-06fb-42ba-98c6-b30f8f9d2829"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.314298 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-scripts" (OuterVolumeSpecName: "scripts") pod "f67e7c90-06fb-42ba-98c6-b30f8f9d2829" (UID: "f67e7c90-06fb-42ba-98c6-b30f8f9d2829"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.316904 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "f67e7c90-06fb-42ba-98c6-b30f8f9d2829" (UID: "f67e7c90-06fb-42ba-98c6-b30f8f9d2829"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.321925 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-kube-api-access-b7brf" (OuterVolumeSpecName: "kube-api-access-b7brf") pod "f67e7c90-06fb-42ba-98c6-b30f8f9d2829" (UID: "f67e7c90-06fb-42ba-98c6-b30f8f9d2829"). InnerVolumeSpecName "kube-api-access-b7brf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.373831 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f67e7c90-06fb-42ba-98c6-b30f8f9d2829" (UID: "f67e7c90-06fb-42ba-98c6-b30f8f9d2829"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.396338 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f67e7c90-06fb-42ba-98c6-b30f8f9d2829" (UID: "f67e7c90-06fb-42ba-98c6-b30f8f9d2829"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.409281 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-config-data" (OuterVolumeSpecName: "config-data") pod "f67e7c90-06fb-42ba-98c6-b30f8f9d2829" (UID: "f67e7c90-06fb-42ba-98c6-b30f8f9d2829"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.409631 4779 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.409715 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.409771 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7brf\" (UniqueName: \"kubernetes.io/projected/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-kube-api-access-b7brf\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.409850 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.409903 4779 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.409959 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.410030 4779 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.410085 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67e7c90-06fb-42ba-98c6-b30f8f9d2829-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.438227 4779 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.511991 4779 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.541033 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tlgl6"] Nov 28 12:56:22 crc kubenswrapper[4779]: W1128 12:56:22.542315 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21e20123_0f0f_48a0_8412_6167f107ed2a.slice/crio-7f828210fc369e4d816ea434ced7b0031aa571987fcd59dc50f2456a18a4e40f WatchSource:0}: Error finding container 7f828210fc369e4d816ea434ced7b0031aa571987fcd59dc50f2456a18a4e40f: Status 404 returned error can't find the container with id 7f828210fc369e4d816ea434ced7b0031aa571987fcd59dc50f2456a18a4e40f Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.946481 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"f67e7c90-06fb-42ba-98c6-b30f8f9d2829","Type":"ContainerDied","Data":"5af88a65c055621a82d5fa5518209c209add1ed2427cf5662605b5c9e6590b92"} Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.946785 4779 scope.go:117] "RemoveContainer" containerID="59c13bcf2469efa826e82d798716c85b168f3ba99f1de588a860b5f0ee81b3ba" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.946506 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.961786 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tlgl6" event={"ID":"21e20123-0f0f-48a0-8412-6167f107ed2a","Type":"ContainerStarted","Data":"7f828210fc369e4d816ea434ced7b0031aa571987fcd59dc50f2456a18a4e40f"} Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.965526 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f8bf0a-80ca-401d-97ce-393a3c58632e","Type":"ContainerStarted","Data":"703f2c2b149434f3e65870eb340a21f051ee294f3e4fc6f008d1f88d3f98230e"} Nov 28 12:56:22 crc kubenswrapper[4779]: I1128 12:56:22.992206 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.002670 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.025409 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 12:56:23 crc kubenswrapper[4779]: E1128 12:56:23.025758 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67e7c90-06fb-42ba-98c6-b30f8f9d2829" containerName="glance-httpd" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.025769 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67e7c90-06fb-42ba-98c6-b30f8f9d2829" containerName="glance-httpd" Nov 28 12:56:23 crc kubenswrapper[4779]: E1128 12:56:23.025802 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67e7c90-06fb-42ba-98c6-b30f8f9d2829" containerName="glance-log" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.025808 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67e7c90-06fb-42ba-98c6-b30f8f9d2829" containerName="glance-log" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.031752 4779 scope.go:117] "RemoveContainer" containerID="483276b16bb4b3bf69e3be86d05e08d9a9522bf4c858a5c52c1b459d2a00c5c8" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.032705 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67e7c90-06fb-42ba-98c6-b30f8f9d2829" containerName="glance-httpd" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.032755 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67e7c90-06fb-42ba-98c6-b30f8f9d2829" containerName="glance-log" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.042997 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.051524 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.051749 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.084243 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.136476 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44e7698e-14e1-4bbe-849b-3a90b6ebd431-scripts\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.136521 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44e7698e-14e1-4bbe-849b-3a90b6ebd431-logs\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.136556 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44e7698e-14e1-4bbe-849b-3a90b6ebd431-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.136597 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcqb8\" (UniqueName: \"kubernetes.io/projected/44e7698e-14e1-4bbe-849b-3a90b6ebd431-kube-api-access-fcqb8\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.136629 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.136692 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44e7698e-14e1-4bbe-849b-3a90b6ebd431-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.136726 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7698e-14e1-4bbe-849b-3a90b6ebd431-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.136748 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e7698e-14e1-4bbe-849b-3a90b6ebd431-config-data\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.238375 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7698e-14e1-4bbe-849b-3a90b6ebd431-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.238467 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e7698e-14e1-4bbe-849b-3a90b6ebd431-config-data\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.238531 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44e7698e-14e1-4bbe-849b-3a90b6ebd431-scripts\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.238575 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44e7698e-14e1-4bbe-849b-3a90b6ebd431-logs\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.238612 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44e7698e-14e1-4bbe-849b-3a90b6ebd431-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.238687 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcqb8\" (UniqueName: \"kubernetes.io/projected/44e7698e-14e1-4bbe-849b-3a90b6ebd431-kube-api-access-fcqb8\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.238746 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.238883 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44e7698e-14e1-4bbe-849b-3a90b6ebd431-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.240755 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.240776 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44e7698e-14e1-4bbe-849b-3a90b6ebd431-logs\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.241242 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44e7698e-14e1-4bbe-849b-3a90b6ebd431-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.245211 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44e7698e-14e1-4bbe-849b-3a90b6ebd431-config-data\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.246331 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44e7698e-14e1-4bbe-849b-3a90b6ebd431-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.250012 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44e7698e-14e1-4bbe-849b-3a90b6ebd431-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.261048 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44e7698e-14e1-4bbe-849b-3a90b6ebd431-scripts\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.265801 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcqb8\" (UniqueName: \"kubernetes.io/projected/44e7698e-14e1-4bbe-849b-3a90b6ebd431-kube-api-access-fcqb8\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.271987 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"44e7698e-14e1-4bbe-849b-3a90b6ebd431\") " pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.395332 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.424363 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.748450 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f67e7c90-06fb-42ba-98c6-b30f8f9d2829" path="/var/lib/kubelet/pods/f67e7c90-06fb-42ba-98c6-b30f8f9d2829/volumes" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.989107 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f8bf0a-80ca-401d-97ce-393a3c58632e","Type":"ContainerStarted","Data":"93502205b69b1cd53c974f5d0e32fd06f47dbc633e941fff0d0d1c033b4590b9"} Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.989290 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="ceilometer-central-agent" containerID="cri-o://b2083fa439ee9f511a3f2aa7008a0d88d383016308fd6ec290adfe8b372ed815" gracePeriod=30 Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.989556 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.989843 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="proxy-httpd" containerID="cri-o://93502205b69b1cd53c974f5d0e32fd06f47dbc633e941fff0d0d1c033b4590b9" gracePeriod=30 Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.989897 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="sg-core" containerID="cri-o://703f2c2b149434f3e65870eb340a21f051ee294f3e4fc6f008d1f88d3f98230e" gracePeriod=30 Nov 28 12:56:23 crc kubenswrapper[4779]: I1128 12:56:23.989947 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="ceilometer-notification-agent" containerID="cri-o://e3bbf133d8c53cbc78e2eb027ed4358e04e37affd3720ab6dc28daeb405de989" gracePeriod=30 Nov 28 12:56:24 crc kubenswrapper[4779]: I1128 12:56:24.050534 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.29089678 podStartE2EDuration="6.050512887s" podCreationTimestamp="2025-11-28 12:56:18 +0000 UTC" firstStartedPulling="2025-11-28 12:56:19.7929645 +0000 UTC m=+1240.358639854" lastFinishedPulling="2025-11-28 12:56:23.552580607 +0000 UTC m=+1244.118255961" observedRunningTime="2025-11-28 12:56:24.02135852 +0000 UTC m=+1244.587033874" watchObservedRunningTime="2025-11-28 12:56:24.050512887 +0000 UTC m=+1244.616188241" Nov 28 12:56:24 crc kubenswrapper[4779]: I1128 12:56:24.054061 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 12:56:24 crc kubenswrapper[4779]: I1128 12:56:24.283135 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:24 crc kubenswrapper[4779]: I1128 12:56:24.283200 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:24 crc kubenswrapper[4779]: I1128 12:56:24.323084 4779 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:24 crc kubenswrapper[4779]: I1128 12:56:24.368392 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:25 crc kubenswrapper[4779]: I1128 12:56:25.003730 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"44e7698e-14e1-4bbe-849b-3a90b6ebd431","Type":"ContainerStarted","Data":"f8b93f45695b26408c91828d60d8e8f286f38bb8c287f59e92056c27ccd241ec"} Nov 28 12:56:25 crc kubenswrapper[4779]: I1128 12:56:25.004011 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"44e7698e-14e1-4bbe-849b-3a90b6ebd431","Type":"ContainerStarted","Data":"9e2a6bfea63a0645aa84f7595461f3f62b740bb016e9bb17274846d7f5a8bf0d"} Nov 28 12:56:25 crc kubenswrapper[4779]: I1128 12:56:25.014751 4779 generic.go:334] "Generic (PLEG): container finished" podID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerID="93502205b69b1cd53c974f5d0e32fd06f47dbc633e941fff0d0d1c033b4590b9" exitCode=0 Nov 28 12:56:25 crc kubenswrapper[4779]: I1128 12:56:25.014779 4779 generic.go:334] "Generic (PLEG): container finished" podID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerID="703f2c2b149434f3e65870eb340a21f051ee294f3e4fc6f008d1f88d3f98230e" exitCode=2 Nov 28 12:56:25 crc kubenswrapper[4779]: I1128 12:56:25.014786 4779 generic.go:334] "Generic (PLEG): container finished" podID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerID="e3bbf133d8c53cbc78e2eb027ed4358e04e37affd3720ab6dc28daeb405de989" exitCode=0 Nov 28 12:56:25 crc kubenswrapper[4779]: I1128 12:56:25.014839 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f8bf0a-80ca-401d-97ce-393a3c58632e","Type":"ContainerDied","Data":"93502205b69b1cd53c974f5d0e32fd06f47dbc633e941fff0d0d1c033b4590b9"} Nov 28 12:56:25 crc kubenswrapper[4779]: I1128 12:56:25.014883 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f8bf0a-80ca-401d-97ce-393a3c58632e","Type":"ContainerDied","Data":"703f2c2b149434f3e65870eb340a21f051ee294f3e4fc6f008d1f88d3f98230e"} Nov 28 12:56:25 crc kubenswrapper[4779]: I1128 12:56:25.014900 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f8bf0a-80ca-401d-97ce-393a3c58632e","Type":"ContainerDied","Data":"e3bbf133d8c53cbc78e2eb027ed4358e04e37affd3720ab6dc28daeb405de989"} Nov 28 12:56:25 crc kubenswrapper[4779]: I1128 12:56:25.015237 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:25 crc kubenswrapper[4779]: I1128 12:56:25.015282 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:26 crc kubenswrapper[4779]: I1128 12:56:26.031317 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"44e7698e-14e1-4bbe-849b-3a90b6ebd431","Type":"ContainerStarted","Data":"4bdace1c37b91eee478afb4a8a0df04e4cf0a90a1dfe5f936a5214467795414b"} Nov 28 12:56:26 crc kubenswrapper[4779]: I1128 12:56:26.065324 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.065305124 podStartE2EDuration="4.065305124s" podCreationTimestamp="2025-11-28 12:56:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:56:26.061480204 +0000 UTC m=+1246.627155558" watchObservedRunningTime="2025-11-28 12:56:26.065305124 +0000 UTC m=+1246.630980478" Nov 28 12:56:26 crc kubenswrapper[4779]: I1128 12:56:26.916692 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:27 crc kubenswrapper[4779]: I1128 12:56:27.038997 4779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:56:27 crc kubenswrapper[4779]: I1128 12:56:27.376715 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 12:56:30 crc kubenswrapper[4779]: I1128 12:56:30.075397 4779 generic.go:334] "Generic (PLEG): container finished" podID="ed331d56-a8fd-4fd5-8d71-45f853a35fa8" containerID="a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e" exitCode=0 Nov 28 12:56:30 crc kubenswrapper[4779]: I1128 12:56:30.075532 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-55ff4b54d5-48p68" event={"ID":"ed331d56-a8fd-4fd5-8d71-45f853a35fa8","Type":"ContainerDied","Data":"a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e"} Nov 28 12:56:30 crc kubenswrapper[4779]: E1128 12:56:30.801107 4779 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e is running failed: container process not found" containerID="a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 12:56:30 crc kubenswrapper[4779]: E1128 12:56:30.801714 4779 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e is running failed: container process not found" containerID="a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 12:56:30 crc kubenswrapper[4779]: E1128 12:56:30.802197 4779 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e is running failed: container process not found" containerID="a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 28 12:56:30 crc kubenswrapper[4779]: E1128 12:56:30.802233 4779 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-55ff4b54d5-48p68" podUID="ed331d56-a8fd-4fd5-8d71-45f853a35fa8" containerName="heat-engine" Nov 28 12:56:31 crc kubenswrapper[4779]: I1128 12:56:31.089960 4779 generic.go:334] "Generic (PLEG): container finished" podID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerID="b2083fa439ee9f511a3f2aa7008a0d88d383016308fd6ec290adfe8b372ed815" exitCode=0 Nov 28 12:56:31 crc kubenswrapper[4779]: I1128 12:56:31.089994 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"a2f8bf0a-80ca-401d-97ce-393a3c58632e","Type":"ContainerDied","Data":"b2083fa439ee9f511a3f2aa7008a0d88d383016308fd6ec290adfe8b372ed815"} Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.171792 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.223164 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27nm7\" (UniqueName: \"kubernetes.io/projected/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-kube-api-access-27nm7\") pod \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.223344 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-config-data\") pod \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.223379 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-config-data-custom\") pod \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.223401 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-combined-ca-bundle\") pod \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\" (UID: \"ed331d56-a8fd-4fd5-8d71-45f853a35fa8\") " Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.227745 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-kube-api-access-27nm7" (OuterVolumeSpecName: "kube-api-access-27nm7") pod "ed331d56-a8fd-4fd5-8d71-45f853a35fa8" (UID: "ed331d56-a8fd-4fd5-8d71-45f853a35fa8"). InnerVolumeSpecName "kube-api-access-27nm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.229761 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ed331d56-a8fd-4fd5-8d71-45f853a35fa8" (UID: "ed331d56-a8fd-4fd5-8d71-45f853a35fa8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.304305 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed331d56-a8fd-4fd5-8d71-45f853a35fa8" (UID: "ed331d56-a8fd-4fd5-8d71-45f853a35fa8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.325376 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-config-data" (OuterVolumeSpecName: "config-data") pod "ed331d56-a8fd-4fd5-8d71-45f853a35fa8" (UID: "ed331d56-a8fd-4fd5-8d71-45f853a35fa8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.325815 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27nm7\" (UniqueName: \"kubernetes.io/projected/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-kube-api-access-27nm7\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.325839 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.325850 4779 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.325858 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed331d56-a8fd-4fd5-8d71-45f853a35fa8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.364050 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.426786 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-combined-ca-bundle\") pod \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.427208 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-config-data\") pod \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.427235 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-sg-core-conf-yaml\") pod \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.427279 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btlhn\" (UniqueName: \"kubernetes.io/projected/a2f8bf0a-80ca-401d-97ce-393a3c58632e-kube-api-access-btlhn\") pod \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.427310 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-scripts\") pod \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.427333 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f8bf0a-80ca-401d-97ce-393a3c58632e-log-httpd\") pod \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.427411 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f8bf0a-80ca-401d-97ce-393a3c58632e-run-httpd\") pod \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\" (UID: \"a2f8bf0a-80ca-401d-97ce-393a3c58632e\") " Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.428044 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f8bf0a-80ca-401d-97ce-393a3c58632e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a2f8bf0a-80ca-401d-97ce-393a3c58632e" (UID: "a2f8bf0a-80ca-401d-97ce-393a3c58632e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.428159 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f8bf0a-80ca-401d-97ce-393a3c58632e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a2f8bf0a-80ca-401d-97ce-393a3c58632e" (UID: "a2f8bf0a-80ca-401d-97ce-393a3c58632e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.431395 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f8bf0a-80ca-401d-97ce-393a3c58632e-kube-api-access-btlhn" (OuterVolumeSpecName: "kube-api-access-btlhn") pod "a2f8bf0a-80ca-401d-97ce-393a3c58632e" (UID: "a2f8bf0a-80ca-401d-97ce-393a3c58632e"). InnerVolumeSpecName "kube-api-access-btlhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.431768 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-scripts" (OuterVolumeSpecName: "scripts") pod "a2f8bf0a-80ca-401d-97ce-393a3c58632e" (UID: "a2f8bf0a-80ca-401d-97ce-393a3c58632e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.456577 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a2f8bf0a-80ca-401d-97ce-393a3c58632e" (UID: "a2f8bf0a-80ca-401d-97ce-393a3c58632e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.493922 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2f8bf0a-80ca-401d-97ce-393a3c58632e" (UID: "a2f8bf0a-80ca-401d-97ce-393a3c58632e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.521452 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-config-data" (OuterVolumeSpecName: "config-data") pod "a2f8bf0a-80ca-401d-97ce-393a3c58632e" (UID: "a2f8bf0a-80ca-401d-97ce-393a3c58632e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.529581 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.529614 4779 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.529627 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btlhn\" (UniqueName: \"kubernetes.io/projected/a2f8bf0a-80ca-401d-97ce-393a3c58632e-kube-api-access-btlhn\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.529639 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.529647 4779 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f8bf0a-80ca-401d-97ce-393a3c58632e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.529656 4779 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2f8bf0a-80ca-401d-97ce-393a3c58632e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:32 crc kubenswrapper[4779]: I1128 12:56:32.529664 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f8bf0a-80ca-401d-97ce-393a3c58632e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.110759 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tlgl6" event={"ID":"21e20123-0f0f-48a0-8412-6167f107ed2a","Type":"ContainerStarted","Data":"022440ff63c3b6837f6091a7c8348788600a0b01cdfe7994b11c6182e955e40c"} Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.113713 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2f8bf0a-80ca-401d-97ce-393a3c58632e","Type":"ContainerDied","Data":"5e5a64091523aadc92f633747d311ecf3f4194005dc7290b871564163db7e2e1"} Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.113747 4779 scope.go:117] "RemoveContainer" containerID="93502205b69b1cd53c974f5d0e32fd06f47dbc633e941fff0d0d1c033b4590b9" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.113840 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.120052 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-55ff4b54d5-48p68" event={"ID":"ed331d56-a8fd-4fd5-8d71-45f853a35fa8","Type":"ContainerDied","Data":"45e855e310cc8afb9e5b6376890bb26e81baa516668025688fdd8eff7a208c1f"} Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.120083 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-55ff4b54d5-48p68" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.141265 4779 scope.go:117] "RemoveContainer" containerID="703f2c2b149434f3e65870eb340a21f051ee294f3e4fc6f008d1f88d3f98230e" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.154573 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-tlgl6" podStartSLOduration=3.5276215090000003 podStartE2EDuration="13.154555177s" podCreationTimestamp="2025-11-28 12:56:20 +0000 UTC" firstStartedPulling="2025-11-28 12:56:22.544550196 +0000 UTC m=+1243.110225550" lastFinishedPulling="2025-11-28 12:56:32.171483864 +0000 UTC m=+1252.737159218" observedRunningTime="2025-11-28 12:56:33.147192533 +0000 UTC m=+1253.712867887" watchObservedRunningTime="2025-11-28 12:56:33.154555177 +0000 UTC m=+1253.720230531" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.188597 4779 scope.go:117] "RemoveContainer" containerID="e3bbf133d8c53cbc78e2eb027ed4358e04e37affd3720ab6dc28daeb405de989" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.192498 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-55ff4b54d5-48p68"] Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.207970 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-55ff4b54d5-48p68"] Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.227331 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.229450 4779 scope.go:117] "RemoveContainer" containerID="b2083fa439ee9f511a3f2aa7008a0d88d383016308fd6ec290adfe8b372ed815" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.248212 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.257446 4779 scope.go:117] "RemoveContainer" containerID="a46164af1ca68791f72ea8171ee935cc2b683dc226613895d2c7a1ad49678b4e" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.271880 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:33 crc kubenswrapper[4779]: E1128 12:56:33.272698 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="ceilometer-central-agent" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.272730 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="ceilometer-central-agent" Nov 28 12:56:33 crc kubenswrapper[4779]: E1128 12:56:33.272753 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="ceilometer-notification-agent" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.272764 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="ceilometer-notification-agent" Nov 28 12:56:33 crc kubenswrapper[4779]: E1128 12:56:33.272786 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="sg-core" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.272796 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="sg-core" Nov 28 12:56:33 crc kubenswrapper[4779]: E1128 12:56:33.272818 4779 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="proxy-httpd" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.272828 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="proxy-httpd" Nov 28 12:56:33 crc kubenswrapper[4779]: E1128 12:56:33.272851 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed331d56-a8fd-4fd5-8d71-45f853a35fa8" containerName="heat-engine" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.272861 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed331d56-a8fd-4fd5-8d71-45f853a35fa8" containerName="heat-engine" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.273193 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="sg-core" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.273212 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed331d56-a8fd-4fd5-8d71-45f853a35fa8" containerName="heat-engine" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.273223 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="ceilometer-central-agent" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.273241 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="ceilometer-notification-agent" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.273255 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" containerName="proxy-httpd" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.283960 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.290560 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.293768 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.299330 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.347317 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f171b23-17c5-4da3-9d82-5041a6ac24f5-run-httpd\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.347361 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f171b23-17c5-4da3-9d82-5041a6ac24f5-log-httpd\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.347523 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-scripts\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.347545 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.347560 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-config-data\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.347585 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.347602 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfrr4\" (UniqueName: \"kubernetes.io/projected/1f171b23-17c5-4da3-9d82-5041a6ac24f5-kube-api-access-zfrr4\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.396400 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.396492 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 12:56:33 crc kubenswrapper[4779]: 
I1128 12:56:33.441319 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.450069 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f171b23-17c5-4da3-9d82-5041a6ac24f5-run-httpd\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.450230 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f171b23-17c5-4da3-9d82-5041a6ac24f5-log-httpd\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.450432 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-scripts\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.450477 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.450521 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-config-data\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.450595 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.450631 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfrr4\" (UniqueName: \"kubernetes.io/projected/1f171b23-17c5-4da3-9d82-5041a6ac24f5-kube-api-access-zfrr4\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.454052 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f171b23-17c5-4da3-9d82-5041a6ac24f5-run-httpd\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.454547 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f171b23-17c5-4da3-9d82-5041a6ac24f5-log-httpd\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.463909 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-config-data\") pod 
\"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.465033 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.469083 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.469739 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-scripts\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.489054 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfrr4\" (UniqueName: \"kubernetes.io/projected/1f171b23-17c5-4da3-9d82-5041a6ac24f5-kube-api-access-zfrr4\") pod \"ceilometer-0\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.493204 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.602654 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.743114 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2f8bf0a-80ca-401d-97ce-393a3c58632e" path="/var/lib/kubelet/pods/a2f8bf0a-80ca-401d-97ce-393a3c58632e/volumes" Nov 28 12:56:33 crc kubenswrapper[4779]: I1128 12:56:33.745176 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed331d56-a8fd-4fd5-8d71-45f853a35fa8" path="/var/lib/kubelet/pods/ed331d56-a8fd-4fd5-8d71-45f853a35fa8/volumes" Nov 28 12:56:34 crc kubenswrapper[4779]: I1128 12:56:34.030937 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:34 crc kubenswrapper[4779]: I1128 12:56:34.131662 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f171b23-17c5-4da3-9d82-5041a6ac24f5","Type":"ContainerStarted","Data":"1663da3babdf61dde6feed8ad8d16ab2ed70bbd2f6606869f5929b2fb93b2158"} Nov 28 12:56:34 crc kubenswrapper[4779]: I1128 12:56:34.132903 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 12:56:34 crc kubenswrapper[4779]: I1128 12:56:34.132928 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 12:56:35 crc kubenswrapper[4779]: I1128 12:56:35.145521 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f171b23-17c5-4da3-9d82-5041a6ac24f5","Type":"ContainerStarted","Data":"c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43"} Nov 28 12:56:36 crc kubenswrapper[4779]: I1128 12:56:36.038126 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 12:56:36 crc kubenswrapper[4779]: I1128 12:56:36.040038 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 12:56:36 crc kubenswrapper[4779]: I1128 12:56:36.158179 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f171b23-17c5-4da3-9d82-5041a6ac24f5","Type":"ContainerStarted","Data":"3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537"} Nov 28 12:56:37 crc kubenswrapper[4779]: I1128 12:56:37.168743 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f171b23-17c5-4da3-9d82-5041a6ac24f5","Type":"ContainerStarted","Data":"7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb"} Nov 28 12:56:38 crc kubenswrapper[4779]: I1128 12:56:38.011576 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:39 crc kubenswrapper[4779]: I1128 12:56:39.193582 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f171b23-17c5-4da3-9d82-5041a6ac24f5","Type":"ContainerStarted","Data":"1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4"} Nov 28 12:56:39 crc kubenswrapper[4779]: I1128 12:56:39.195512 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 12:56:39 crc kubenswrapper[4779]: I1128 12:56:39.194328 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="proxy-httpd" 
containerID="cri-o://1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4" gracePeriod=30 Nov 28 12:56:39 crc kubenswrapper[4779]: I1128 12:56:39.193898 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="ceilometer-central-agent" containerID="cri-o://c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43" gracePeriod=30 Nov 28 12:56:39 crc kubenswrapper[4779]: I1128 12:56:39.194349 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="ceilometer-notification-agent" containerID="cri-o://3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537" gracePeriod=30 Nov 28 12:56:39 crc kubenswrapper[4779]: I1128 12:56:39.194340 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="sg-core" containerID="cri-o://7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb" gracePeriod=30 Nov 28 12:56:39 crc kubenswrapper[4779]: I1128 12:56:39.229976 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.2007731919999998 podStartE2EDuration="6.229961077s" podCreationTimestamp="2025-11-28 12:56:33 +0000 UTC" firstStartedPulling="2025-11-28 12:56:34.035242135 +0000 UTC m=+1254.600917509" lastFinishedPulling="2025-11-28 12:56:38.06443004 +0000 UTC m=+1258.630105394" observedRunningTime="2025-11-28 12:56:39.222610623 +0000 UTC m=+1259.788285977" watchObservedRunningTime="2025-11-28 12:56:39.229961077 +0000 UTC m=+1259.795636431" Nov 28 12:56:40 crc kubenswrapper[4779]: I1128 12:56:40.208966 4779 generic.go:334] "Generic (PLEG): container finished" podID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerID="1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4" exitCode=0 Nov 28 12:56:40 crc kubenswrapper[4779]: I1128 12:56:40.209210 4779 generic.go:334] "Generic (PLEG): container finished" podID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerID="7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb" exitCode=2 Nov 28 12:56:40 crc kubenswrapper[4779]: I1128 12:56:40.209218 4779 generic.go:334] "Generic (PLEG): container finished" podID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerID="3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537" exitCode=0 Nov 28 12:56:40 crc kubenswrapper[4779]: I1128 12:56:40.209116 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f171b23-17c5-4da3-9d82-5041a6ac24f5","Type":"ContainerDied","Data":"1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4"} Nov 28 12:56:40 crc kubenswrapper[4779]: I1128 12:56:40.209246 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f171b23-17c5-4da3-9d82-5041a6ac24f5","Type":"ContainerDied","Data":"7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb"} Nov 28 12:56:40 crc kubenswrapper[4779]: I1128 12:56:40.209256 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f171b23-17c5-4da3-9d82-5041a6ac24f5","Type":"ContainerDied","Data":"3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537"} Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.152070 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.179911 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-combined-ca-bundle\") pod \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.180013 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-config-data\") pod \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.180074 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-scripts\") pod \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.180116 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-sg-core-conf-yaml\") pod \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.180223 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f171b23-17c5-4da3-9d82-5041a6ac24f5-run-httpd\") pod \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.180269 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfrr4\" (UniqueName: \"kubernetes.io/projected/1f171b23-17c5-4da3-9d82-5041a6ac24f5-kube-api-access-zfrr4\") pod \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.180300 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f171b23-17c5-4da3-9d82-5041a6ac24f5-log-httpd\") pod \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\" (UID: \"1f171b23-17c5-4da3-9d82-5041a6ac24f5\") " Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.180807 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f171b23-17c5-4da3-9d82-5041a6ac24f5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1f171b23-17c5-4da3-9d82-5041a6ac24f5" (UID: "1f171b23-17c5-4da3-9d82-5041a6ac24f5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.181168 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f171b23-17c5-4da3-9d82-5041a6ac24f5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1f171b23-17c5-4da3-9d82-5041a6ac24f5" (UID: "1f171b23-17c5-4da3-9d82-5041a6ac24f5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.187521 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f171b23-17c5-4da3-9d82-5041a6ac24f5-kube-api-access-zfrr4" (OuterVolumeSpecName: "kube-api-access-zfrr4") pod "1f171b23-17c5-4da3-9d82-5041a6ac24f5" (UID: "1f171b23-17c5-4da3-9d82-5041a6ac24f5"). InnerVolumeSpecName "kube-api-access-zfrr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.197291 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-scripts" (OuterVolumeSpecName: "scripts") pod "1f171b23-17c5-4da3-9d82-5041a6ac24f5" (UID: "1f171b23-17c5-4da3-9d82-5041a6ac24f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.232721 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1f171b23-17c5-4da3-9d82-5041a6ac24f5" (UID: "1f171b23-17c5-4da3-9d82-5041a6ac24f5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.282942 4779 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f171b23-17c5-4da3-9d82-5041a6ac24f5-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.282978 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfrr4\" (UniqueName: \"kubernetes.io/projected/1f171b23-17c5-4da3-9d82-5041a6ac24f5-kube-api-access-zfrr4\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.282990 4779 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f171b23-17c5-4da3-9d82-5041a6ac24f5-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.282998 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.283006 4779 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.285470 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f171b23-17c5-4da3-9d82-5041a6ac24f5" (UID: "1f171b23-17c5-4da3-9d82-5041a6ac24f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.314028 4779 generic.go:334] "Generic (PLEG): container finished" podID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerID="c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43" exitCode=0 Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.314074 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f171b23-17c5-4da3-9d82-5041a6ac24f5","Type":"ContainerDied","Data":"c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43"} Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.314119 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f171b23-17c5-4da3-9d82-5041a6ac24f5","Type":"ContainerDied","Data":"1663da3babdf61dde6feed8ad8d16ab2ed70bbd2f6606869f5929b2fb93b2158"} Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.314149 4779 scope.go:117] "RemoveContainer" containerID="1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.314253 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.343136 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-config-data" (OuterVolumeSpecName: "config-data") pod "1f171b23-17c5-4da3-9d82-5041a6ac24f5" (UID: "1f171b23-17c5-4da3-9d82-5041a6ac24f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.349191 4779 scope.go:117] "RemoveContainer" containerID="7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.367011 4779 scope.go:117] "RemoveContainer" containerID="3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.384651 4779 scope.go:117] "RemoveContainer" containerID="c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.385018 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.385045 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f171b23-17c5-4da3-9d82-5041a6ac24f5-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.405504 4779 scope.go:117] "RemoveContainer" containerID="1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4" Nov 28 12:56:48 crc kubenswrapper[4779]: E1128 12:56:48.405943 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4\": container with ID starting with 1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4 not found: ID does not exist" containerID="1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.405984 4779 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4"} err="failed to get container status \"1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4\": rpc error: code = NotFound desc = could not find container \"1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4\": container with ID starting with 1ec34a3d14b94674b2da0b290f86d3a76534628f67f3d87e3baf64a19153cab4 not found: ID does not exist" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.406025 4779 scope.go:117] "RemoveContainer" containerID="7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb" Nov 28 12:56:48 crc kubenswrapper[4779]: E1128 12:56:48.408315 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb\": container with ID starting with 7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb not found: ID does not exist" containerID="7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.408358 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb"} err="failed to get container status \"7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb\": rpc error: code = NotFound desc = could not find container \"7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb\": container with ID starting with 7dfddd06074ddff9cfea0d1754e5a2f0ae179e9459de2a92065f4cdeae6d82fb not found: ID does not exist" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.408387 4779 scope.go:117] "RemoveContainer" containerID="3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537" Nov 28 12:56:48 crc kubenswrapper[4779]: E1128 12:56:48.408685 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537\": container with ID starting with 3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537 not found: ID does not exist" containerID="3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.408711 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537"} err="failed to get container status \"3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537\": rpc error: code = NotFound desc = could not find container \"3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537\": container with ID starting with 3beebef2209289f075721a39ce279ae56821e276a98b4a0509d04ec513834537 not found: ID does not exist" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.408730 4779 scope.go:117] "RemoveContainer" containerID="c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43" Nov 28 12:56:48 crc kubenswrapper[4779]: E1128 12:56:48.408950 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43\": container with ID starting with c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43 not found: ID does not exist" 
containerID="c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.408968 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43"} err="failed to get container status \"c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43\": rpc error: code = NotFound desc = could not find container \"c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43\": container with ID starting with c7c08c0a4289edeec60d37f30801c879371bb37a3f2fb521355be07ec1ac5a43 not found: ID does not exist" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.662381 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.670178 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.686602 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:48 crc kubenswrapper[4779]: E1128 12:56:48.687188 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="ceilometer-notification-agent" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.687257 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="ceilometer-notification-agent" Nov 28 12:56:48 crc kubenswrapper[4779]: E1128 12:56:48.687311 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="proxy-httpd" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.687368 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="proxy-httpd" Nov 28 12:56:48 crc kubenswrapper[4779]: E1128 12:56:48.687448 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="ceilometer-central-agent" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.687505 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="ceilometer-central-agent" Nov 28 12:56:48 crc kubenswrapper[4779]: E1128 12:56:48.687559 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="sg-core" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.687633 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="sg-core" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.687836 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="ceilometer-notification-agent" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.687918 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="ceilometer-central-agent" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.687979 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="sg-core" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.688042 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" containerName="proxy-httpd" Nov 28 
12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.689604 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.691791 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.692665 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.713424 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.791691 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.791774 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe0048e-087b-49e2-9625-3893d7eecb29-log-httpd\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.791802 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe0048e-087b-49e2-9625-3893d7eecb29-run-httpd\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.791819 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.791836 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qrnv\" (UniqueName: \"kubernetes.io/projected/afe0048e-087b-49e2-9625-3893d7eecb29-kube-api-access-4qrnv\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.792037 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-config-data\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.792182 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-scripts\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.893874 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe0048e-087b-49e2-9625-3893d7eecb29-log-httpd\") pod \"ceilometer-0\" (UID: 
\"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.893921 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe0048e-087b-49e2-9625-3893d7eecb29-run-httpd\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.893937 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.893957 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qrnv\" (UniqueName: \"kubernetes.io/projected/afe0048e-087b-49e2-9625-3893d7eecb29-kube-api-access-4qrnv\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.894028 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-config-data\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.894060 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-scripts\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.894099 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.894515 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe0048e-087b-49e2-9625-3893d7eecb29-log-httpd\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.894727 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe0048e-087b-49e2-9625-3893d7eecb29-run-httpd\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.899199 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.900117 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-config-data\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " 
pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.900818 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-scripts\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.902452 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.921134 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qrnv\" (UniqueName: \"kubernetes.io/projected/afe0048e-087b-49e2-9625-3893d7eecb29-kube-api-access-4qrnv\") pod \"ceilometer-0\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " pod="openstack/ceilometer-0" Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.938264 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:48 crc kubenswrapper[4779]: I1128 12:56:48.940540 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:49 crc kubenswrapper[4779]: I1128 12:56:49.426230 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:49 crc kubenswrapper[4779]: W1128 12:56:49.427173 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafe0048e_087b_49e2_9625_3893d7eecb29.slice/crio-ef9f1302f43ff26d9b3f0856a4b00ccca3ee266335e3f13eca04ae406c076d47 WatchSource:0}: Error finding container ef9f1302f43ff26d9b3f0856a4b00ccca3ee266335e3f13eca04ae406c076d47: Status 404 returned error can't find the container with id ef9f1302f43ff26d9b3f0856a4b00ccca3ee266335e3f13eca04ae406c076d47 Nov 28 12:56:49 crc kubenswrapper[4779]: I1128 12:56:49.736173 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f171b23-17c5-4da3-9d82-5041a6ac24f5" path="/var/lib/kubelet/pods/1f171b23-17c5-4da3-9d82-5041a6ac24f5/volumes" Nov 28 12:56:50 crc kubenswrapper[4779]: I1128 12:56:50.337343 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe0048e-087b-49e2-9625-3893d7eecb29","Type":"ContainerStarted","Data":"ef9f1302f43ff26d9b3f0856a4b00ccca3ee266335e3f13eca04ae406c076d47"} Nov 28 12:56:51 crc kubenswrapper[4779]: I1128 12:56:51.357132 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe0048e-087b-49e2-9625-3893d7eecb29","Type":"ContainerStarted","Data":"83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef"} Nov 28 12:56:52 crc kubenswrapper[4779]: I1128 12:56:52.367841 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe0048e-087b-49e2-9625-3893d7eecb29","Type":"ContainerStarted","Data":"26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83"} Nov 28 12:56:53 crc kubenswrapper[4779]: I1128 12:56:53.377831 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe0048e-087b-49e2-9625-3893d7eecb29","Type":"ContainerStarted","Data":"d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f"} Nov 28 12:56:55 crc 
kubenswrapper[4779]: I1128 12:56:55.404048 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe0048e-087b-49e2-9625-3893d7eecb29","Type":"ContainerStarted","Data":"b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35"} Nov 28 12:56:55 crc kubenswrapper[4779]: I1128 12:56:55.404654 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 12:56:55 crc kubenswrapper[4779]: I1128 12:56:55.404447 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="ceilometer-notification-agent" containerID="cri-o://26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83" gracePeriod=30 Nov 28 12:56:55 crc kubenswrapper[4779]: I1128 12:56:55.404232 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="ceilometer-central-agent" containerID="cri-o://83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef" gracePeriod=30 Nov 28 12:56:55 crc kubenswrapper[4779]: I1128 12:56:55.404442 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="sg-core" containerID="cri-o://d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f" gracePeriod=30 Nov 28 12:56:55 crc kubenswrapper[4779]: I1128 12:56:55.404464 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="proxy-httpd" containerID="cri-o://b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35" gracePeriod=30 Nov 28 12:56:55 crc kubenswrapper[4779]: I1128 12:56:55.460247 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.718450087 podStartE2EDuration="7.460219474s" podCreationTimestamp="2025-11-28 12:56:48 +0000 UTC" firstStartedPulling="2025-11-28 12:56:49.430685171 +0000 UTC m=+1269.996360525" lastFinishedPulling="2025-11-28 12:56:54.172454558 +0000 UTC m=+1274.738129912" observedRunningTime="2025-11-28 12:56:55.443282828 +0000 UTC m=+1276.008958222" watchObservedRunningTime="2025-11-28 12:56:55.460219474 +0000 UTC m=+1276.025894838" Nov 28 12:56:56 crc kubenswrapper[4779]: I1128 12:56:56.424725 4779 generic.go:334] "Generic (PLEG): container finished" podID="afe0048e-087b-49e2-9625-3893d7eecb29" containerID="b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35" exitCode=0 Nov 28 12:56:56 crc kubenswrapper[4779]: I1128 12:56:56.424779 4779 generic.go:334] "Generic (PLEG): container finished" podID="afe0048e-087b-49e2-9625-3893d7eecb29" containerID="d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f" exitCode=2 Nov 28 12:56:56 crc kubenswrapper[4779]: I1128 12:56:56.424796 4779 generic.go:334] "Generic (PLEG): container finished" podID="afe0048e-087b-49e2-9625-3893d7eecb29" containerID="26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83" exitCode=0 Nov 28 12:56:56 crc kubenswrapper[4779]: I1128 12:56:56.424799 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe0048e-087b-49e2-9625-3893d7eecb29","Type":"ContainerDied","Data":"b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35"} Nov 28 12:56:56 crc kubenswrapper[4779]: 
I1128 12:56:56.424868 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe0048e-087b-49e2-9625-3893d7eecb29","Type":"ContainerDied","Data":"d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f"} Nov 28 12:56:56 crc kubenswrapper[4779]: I1128 12:56:56.424888 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe0048e-087b-49e2-9625-3893d7eecb29","Type":"ContainerDied","Data":"26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83"} Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.241838 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.369357 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-config-data\") pod \"afe0048e-087b-49e2-9625-3893d7eecb29\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.369446 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-combined-ca-bundle\") pod \"afe0048e-087b-49e2-9625-3893d7eecb29\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.369532 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe0048e-087b-49e2-9625-3893d7eecb29-run-httpd\") pod \"afe0048e-087b-49e2-9625-3893d7eecb29\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.369561 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-sg-core-conf-yaml\") pod \"afe0048e-087b-49e2-9625-3893d7eecb29\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.369639 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-scripts\") pod \"afe0048e-087b-49e2-9625-3893d7eecb29\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.369665 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qrnv\" (UniqueName: \"kubernetes.io/projected/afe0048e-087b-49e2-9625-3893d7eecb29-kube-api-access-4qrnv\") pod \"afe0048e-087b-49e2-9625-3893d7eecb29\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.369702 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe0048e-087b-49e2-9625-3893d7eecb29-log-httpd\") pod \"afe0048e-087b-49e2-9625-3893d7eecb29\" (UID: \"afe0048e-087b-49e2-9625-3893d7eecb29\") " Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.370295 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afe0048e-087b-49e2-9625-3893d7eecb29-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "afe0048e-087b-49e2-9625-3893d7eecb29" (UID: "afe0048e-087b-49e2-9625-3893d7eecb29"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.370641 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afe0048e-087b-49e2-9625-3893d7eecb29-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "afe0048e-087b-49e2-9625-3893d7eecb29" (UID: "afe0048e-087b-49e2-9625-3893d7eecb29"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.376214 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-scripts" (OuterVolumeSpecName: "scripts") pod "afe0048e-087b-49e2-9625-3893d7eecb29" (UID: "afe0048e-087b-49e2-9625-3893d7eecb29"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.377870 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe0048e-087b-49e2-9625-3893d7eecb29-kube-api-access-4qrnv" (OuterVolumeSpecName: "kube-api-access-4qrnv") pod "afe0048e-087b-49e2-9625-3893d7eecb29" (UID: "afe0048e-087b-49e2-9625-3893d7eecb29"). InnerVolumeSpecName "kube-api-access-4qrnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.420191 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "afe0048e-087b-49e2-9625-3893d7eecb29" (UID: "afe0048e-087b-49e2-9625-3893d7eecb29"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.441248 4779 generic.go:334] "Generic (PLEG): container finished" podID="afe0048e-087b-49e2-9625-3893d7eecb29" containerID="83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef" exitCode=0 Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.441351 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe0048e-087b-49e2-9625-3893d7eecb29","Type":"ContainerDied","Data":"83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef"} Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.441462 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe0048e-087b-49e2-9625-3893d7eecb29","Type":"ContainerDied","Data":"ef9f1302f43ff26d9b3f0856a4b00ccca3ee266335e3f13eca04ae406c076d47"} Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.441506 4779 scope.go:117] "RemoveContainer" containerID="b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.443302 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.443462 4779 generic.go:334] "Generic (PLEG): container finished" podID="21e20123-0f0f-48a0-8412-6167f107ed2a" containerID="022440ff63c3b6837f6091a7c8348788600a0b01cdfe7994b11c6182e955e40c" exitCode=0 Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.443510 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tlgl6" event={"ID":"21e20123-0f0f-48a0-8412-6167f107ed2a","Type":"ContainerDied","Data":"022440ff63c3b6837f6091a7c8348788600a0b01cdfe7994b11c6182e955e40c"} Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.472738 4779 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe0048e-087b-49e2-9625-3893d7eecb29-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.472790 4779 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.472803 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.472814 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qrnv\" (UniqueName: \"kubernetes.io/projected/afe0048e-087b-49e2-9625-3893d7eecb29-kube-api-access-4qrnv\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.472827 4779 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe0048e-087b-49e2-9625-3893d7eecb29-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.493081 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afe0048e-087b-49e2-9625-3893d7eecb29" (UID: "afe0048e-087b-49e2-9625-3893d7eecb29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.493616 4779 scope.go:117] "RemoveContainer" containerID="d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.514707 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-config-data" (OuterVolumeSpecName: "config-data") pod "afe0048e-087b-49e2-9625-3893d7eecb29" (UID: "afe0048e-087b-49e2-9625-3893d7eecb29"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.522474 4779 scope.go:117] "RemoveContainer" containerID="26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.543507 4779 scope.go:117] "RemoveContainer" containerID="83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.567787 4779 scope.go:117] "RemoveContainer" containerID="b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35" Nov 28 12:56:57 crc kubenswrapper[4779]: E1128 12:56:57.568484 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35\": container with ID starting with b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35 not found: ID does not exist" containerID="b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.568651 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35"} err="failed to get container status \"b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35\": rpc error: code = NotFound desc = could not find container \"b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35\": container with ID starting with b579b29d8f62268e205e3990e1d9c604f379784f56bc8ac4478a9192733c5a35 not found: ID does not exist" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.568789 4779 scope.go:117] "RemoveContainer" containerID="d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f" Nov 28 12:56:57 crc kubenswrapper[4779]: E1128 12:56:57.569548 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f\": container with ID starting with d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f not found: ID does not exist" containerID="d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.569625 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f"} err="failed to get container status \"d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f\": rpc error: code = NotFound desc = could not find container \"d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f\": container with ID starting with d7fef85c46bea294afadeca18f2911a6af0bbee516d2c61aeadaacac86c95b6f not found: ID does not exist" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.569661 4779 scope.go:117] "RemoveContainer" containerID="26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83" Nov 28 12:56:57 crc kubenswrapper[4779]: E1128 12:56:57.570004 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83\": container with ID starting with 26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83 not found: ID does not exist" containerID="26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83" Nov 28 12:56:57 crc 
kubenswrapper[4779]: I1128 12:56:57.570233 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83"} err="failed to get container status \"26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83\": rpc error: code = NotFound desc = could not find container \"26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83\": container with ID starting with 26360b71b3b84a5099c8d0acac7e6cbb2c6797a9d3a41cbc5b66bf1bf9742e83 not found: ID does not exist" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.570292 4779 scope.go:117] "RemoveContainer" containerID="83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef" Nov 28 12:56:57 crc kubenswrapper[4779]: E1128 12:56:57.570665 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef\": container with ID starting with 83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef not found: ID does not exist" containerID="83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.570712 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef"} err="failed to get container status \"83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef\": rpc error: code = NotFound desc = could not find container \"83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef\": container with ID starting with 83290e42af3d416e77ad2fb940bd729bc69eef61b66826ef7628494741f87fef not found: ID does not exist" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.574943 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.574972 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe0048e-087b-49e2-9625-3893d7eecb29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.819451 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.841785 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.869514 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:57 crc kubenswrapper[4779]: E1128 12:56:57.869899 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="ceilometer-central-agent" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.869921 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="ceilometer-central-agent" Nov 28 12:56:57 crc kubenswrapper[4779]: E1128 12:56:57.869951 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="proxy-httpd" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.869960 4779 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="proxy-httpd" Nov 28 12:56:57 crc kubenswrapper[4779]: E1128 12:56:57.869980 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="sg-core" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.869988 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="sg-core" Nov 28 12:56:57 crc kubenswrapper[4779]: E1128 12:56:57.870018 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="ceilometer-notification-agent" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.870027 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="ceilometer-notification-agent" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.870235 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="sg-core" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.870256 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="ceilometer-central-agent" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.870272 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="ceilometer-notification-agent" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.870291 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" containerName="proxy-httpd" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.872065 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.872181 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.876315 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.876787 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.982881 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-config-data\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.983138 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-scripts\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.983302 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll57k\" (UniqueName: \"kubernetes.io/projected/0079f475-1f10-4aeb-a5dd-e74628d26936-kube-api-access-ll57k\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.983451 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.983563 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0079f475-1f10-4aeb-a5dd-e74628d26936-run-httpd\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.983673 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0079f475-1f10-4aeb-a5dd-e74628d26936-log-httpd\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:57 crc kubenswrapper[4779]: I1128 12:56:57.983795 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.085600 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll57k\" (UniqueName: \"kubernetes.io/projected/0079f475-1f10-4aeb-a5dd-e74628d26936-kube-api-access-ll57k\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.085707 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.085736 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0079f475-1f10-4aeb-a5dd-e74628d26936-run-httpd\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.085769 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0079f475-1f10-4aeb-a5dd-e74628d26936-log-httpd\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.085818 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.085891 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-config-data\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.085920 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-scripts\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.087036 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0079f475-1f10-4aeb-a5dd-e74628d26936-run-httpd\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.087054 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0079f475-1f10-4aeb-a5dd-e74628d26936-log-httpd\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.091654 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-scripts\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.094235 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.095282 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.099914 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-config-data\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.116589 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll57k\" (UniqueName: \"kubernetes.io/projected/0079f475-1f10-4aeb-a5dd-e74628d26936-kube-api-access-ll57k\") pod \"ceilometer-0\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.259374 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.721841 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.781669 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:56:58 crc kubenswrapper[4779]: W1128 12:56:58.793351 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0079f475_1f10_4aeb_a5dd_e74628d26936.slice/crio-00f29ba359c3507e7c089e87598f9050b89e81d65f0b41b3fe0d188e5d6d9566 WatchSource:0}: Error finding container 00f29ba359c3507e7c089e87598f9050b89e81d65f0b41b3fe0d188e5d6d9566: Status 404 returned error can't find the container with id 00f29ba359c3507e7c089e87598f9050b89e81d65f0b41b3fe0d188e5d6d9566 Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.799692 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-combined-ca-bundle\") pod \"21e20123-0f0f-48a0-8412-6167f107ed2a\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.799839 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57jfg\" (UniqueName: \"kubernetes.io/projected/21e20123-0f0f-48a0-8412-6167f107ed2a-kube-api-access-57jfg\") pod \"21e20123-0f0f-48a0-8412-6167f107ed2a\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.800120 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-scripts\") pod \"21e20123-0f0f-48a0-8412-6167f107ed2a\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.800313 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-config-data\") pod \"21e20123-0f0f-48a0-8412-6167f107ed2a\" (UID: \"21e20123-0f0f-48a0-8412-6167f107ed2a\") " Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.807055 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-scripts" (OuterVolumeSpecName: "scripts") pod "21e20123-0f0f-48a0-8412-6167f107ed2a" (UID: 
"21e20123-0f0f-48a0-8412-6167f107ed2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.808733 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e20123-0f0f-48a0-8412-6167f107ed2a-kube-api-access-57jfg" (OuterVolumeSpecName: "kube-api-access-57jfg") pod "21e20123-0f0f-48a0-8412-6167f107ed2a" (UID: "21e20123-0f0f-48a0-8412-6167f107ed2a"). InnerVolumeSpecName "kube-api-access-57jfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.829817 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21e20123-0f0f-48a0-8412-6167f107ed2a" (UID: "21e20123-0f0f-48a0-8412-6167f107ed2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.834814 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-config-data" (OuterVolumeSpecName: "config-data") pod "21e20123-0f0f-48a0-8412-6167f107ed2a" (UID: "21e20123-0f0f-48a0-8412-6167f107ed2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.902948 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.902994 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.903014 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e20123-0f0f-48a0-8412-6167f107ed2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:58 crc kubenswrapper[4779]: I1128 12:56:58.903035 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57jfg\" (UniqueName: \"kubernetes.io/projected/21e20123-0f0f-48a0-8412-6167f107ed2a-kube-api-access-57jfg\") on node \"crc\" DevicePath \"\"" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.492313 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tlgl6" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.492303 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tlgl6" event={"ID":"21e20123-0f0f-48a0-8412-6167f107ed2a","Type":"ContainerDied","Data":"7f828210fc369e4d816ea434ced7b0031aa571987fcd59dc50f2456a18a4e40f"} Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.493736 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f828210fc369e4d816ea434ced7b0031aa571987fcd59dc50f2456a18a4e40f" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.499619 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0079f475-1f10-4aeb-a5dd-e74628d26936","Type":"ContainerStarted","Data":"00f29ba359c3507e7c089e87598f9050b89e81d65f0b41b3fe0d188e5d6d9566"} Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.563437 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 12:56:59 crc kubenswrapper[4779]: E1128 12:56:59.563780 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e20123-0f0f-48a0-8412-6167f107ed2a" containerName="nova-cell0-conductor-db-sync" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.563794 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e20123-0f0f-48a0-8412-6167f107ed2a" containerName="nova-cell0-conductor-db-sync" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.563981 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e20123-0f0f-48a0-8412-6167f107ed2a" containerName="nova-cell0-conductor-db-sync" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.564575 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.566311 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2slfk" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.569011 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.613562 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.720669 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d804a5-57cf-458c-8941-c0ec9ea50d24-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"55d804a5-57cf-458c-8941-c0ec9ea50d24\") " pod="openstack/nova-cell0-conductor-0" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.720798 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d804a5-57cf-458c-8941-c0ec9ea50d24-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"55d804a5-57cf-458c-8941-c0ec9ea50d24\") " pod="openstack/nova-cell0-conductor-0" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.721161 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7crh\" (UniqueName: \"kubernetes.io/projected/55d804a5-57cf-458c-8941-c0ec9ea50d24-kube-api-access-f7crh\") pod \"nova-cell0-conductor-0\" (UID: \"55d804a5-57cf-458c-8941-c0ec9ea50d24\") " pod="openstack/nova-cell0-conductor-0" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.739481 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe0048e-087b-49e2-9625-3893d7eecb29" path="/var/lib/kubelet/pods/afe0048e-087b-49e2-9625-3893d7eecb29/volumes" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.822766 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d804a5-57cf-458c-8941-c0ec9ea50d24-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"55d804a5-57cf-458c-8941-c0ec9ea50d24\") " pod="openstack/nova-cell0-conductor-0" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.822930 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7crh\" (UniqueName: \"kubernetes.io/projected/55d804a5-57cf-458c-8941-c0ec9ea50d24-kube-api-access-f7crh\") pod \"nova-cell0-conductor-0\" (UID: \"55d804a5-57cf-458c-8941-c0ec9ea50d24\") " pod="openstack/nova-cell0-conductor-0" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.823012 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d804a5-57cf-458c-8941-c0ec9ea50d24-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"55d804a5-57cf-458c-8941-c0ec9ea50d24\") " pod="openstack/nova-cell0-conductor-0" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.832038 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d804a5-57cf-458c-8941-c0ec9ea50d24-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"55d804a5-57cf-458c-8941-c0ec9ea50d24\") " pod="openstack/nova-cell0-conductor-0" Nov 28 12:56:59 crc 
kubenswrapper[4779]: I1128 12:56:59.832746 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d804a5-57cf-458c-8941-c0ec9ea50d24-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"55d804a5-57cf-458c-8941-c0ec9ea50d24\") " pod="openstack/nova-cell0-conductor-0" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.839655 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7crh\" (UniqueName: \"kubernetes.io/projected/55d804a5-57cf-458c-8941-c0ec9ea50d24-kube-api-access-f7crh\") pod \"nova-cell0-conductor-0\" (UID: \"55d804a5-57cf-458c-8941-c0ec9ea50d24\") " pod="openstack/nova-cell0-conductor-0" Nov 28 12:56:59 crc kubenswrapper[4779]: I1128 12:56:59.929183 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 12:57:00 crc kubenswrapper[4779]: I1128 12:57:00.438265 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 12:57:00 crc kubenswrapper[4779]: W1128 12:57:00.443144 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55d804a5_57cf_458c_8941_c0ec9ea50d24.slice/crio-e407a33f5f2ee636107de35a4ce330defb59ef5cb4ad7efefe7f50bf5930c551 WatchSource:0}: Error finding container e407a33f5f2ee636107de35a4ce330defb59ef5cb4ad7efefe7f50bf5930c551: Status 404 returned error can't find the container with id e407a33f5f2ee636107de35a4ce330defb59ef5cb4ad7efefe7f50bf5930c551 Nov 28 12:57:00 crc kubenswrapper[4779]: I1128 12:57:00.518281 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"55d804a5-57cf-458c-8941-c0ec9ea50d24","Type":"ContainerStarted","Data":"e407a33f5f2ee636107de35a4ce330defb59ef5cb4ad7efefe7f50bf5930c551"} Nov 28 12:57:00 crc kubenswrapper[4779]: I1128 12:57:00.519801 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0079f475-1f10-4aeb-a5dd-e74628d26936","Type":"ContainerStarted","Data":"5ae620089df6c228f1fbcac70aca74fc8376f08c05250b32f82920fe3952bf81"} Nov 28 12:57:01 crc kubenswrapper[4779]: I1128 12:57:01.535606 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"55d804a5-57cf-458c-8941-c0ec9ea50d24","Type":"ContainerStarted","Data":"d83e3260cf32774d57ae94a81b1ac1361fe4067471c7a17eabd0b90feb64ac72"} Nov 28 12:57:01 crc kubenswrapper[4779]: I1128 12:57:01.536608 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 28 12:57:01 crc kubenswrapper[4779]: I1128 12:57:01.541696 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0079f475-1f10-4aeb-a5dd-e74628d26936","Type":"ContainerStarted","Data":"e0e645c256a7953ca5f8241897036d8dfa6793ada8ba733c07c63c94a2601bb1"} Nov 28 12:57:01 crc kubenswrapper[4779]: I1128 12:57:01.542022 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0079f475-1f10-4aeb-a5dd-e74628d26936","Type":"ContainerStarted","Data":"bd2cad8dcc4c16f6bc865b5df3adeb87500cfa21b6fb3c0680b1b07d3525df9a"} Nov 28 12:57:01 crc kubenswrapper[4779]: I1128 12:57:01.558305 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.558283639 podStartE2EDuration="2.558283639s" 
podCreationTimestamp="2025-11-28 12:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:57:01.555338492 +0000 UTC m=+1282.121013856" watchObservedRunningTime="2025-11-28 12:57:01.558283639 +0000 UTC m=+1282.123959003" Nov 28 12:57:03 crc kubenswrapper[4779]: I1128 12:57:03.564820 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0079f475-1f10-4aeb-a5dd-e74628d26936","Type":"ContainerStarted","Data":"a8527b30304e13f25f431b0ec219438e3123f37bc2e492b7397df8161f1021fd"} Nov 28 12:57:03 crc kubenswrapper[4779]: I1128 12:57:03.565554 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 12:57:09 crc kubenswrapper[4779]: I1128 12:57:09.977780 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.011137 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=9.064648178 podStartE2EDuration="13.011086864s" podCreationTimestamp="2025-11-28 12:56:57 +0000 UTC" firstStartedPulling="2025-11-28 12:56:58.796026452 +0000 UTC m=+1279.361701816" lastFinishedPulling="2025-11-28 12:57:02.742465138 +0000 UTC m=+1283.308140502" observedRunningTime="2025-11-28 12:57:03.587919088 +0000 UTC m=+1284.153594482" watchObservedRunningTime="2025-11-28 12:57:10.011086864 +0000 UTC m=+1290.576762248" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.748462 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-q9xf9"] Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.749595 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.762007 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.762163 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.819801 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-q9xf9"] Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.865294 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvz5t\" (UniqueName: \"kubernetes.io/projected/e860d8bc-f4c3-4923-ba29-3fb022978027-kube-api-access-fvz5t\") pod \"nova-cell0-cell-mapping-q9xf9\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") " pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.865572 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-config-data\") pod \"nova-cell0-cell-mapping-q9xf9\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") " pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.865628 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-scripts\") pod \"nova-cell0-cell-mapping-q9xf9\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") " pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.865663 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-q9xf9\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") " pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.933129 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.934239 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.947102 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.967032 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvz5t\" (UniqueName: \"kubernetes.io/projected/e860d8bc-f4c3-4923-ba29-3fb022978027-kube-api-access-fvz5t\") pod \"nova-cell0-cell-mapping-q9xf9\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") " pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.967112 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-config-data\") pod \"nova-cell0-cell-mapping-q9xf9\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") " pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.967181 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-scripts\") pod \"nova-cell0-cell-mapping-q9xf9\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") " pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.967230 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-q9xf9\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") " pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.975498 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-config-data\") pod \"nova-cell0-cell-mapping-q9xf9\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") " pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.975723 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-scripts\") pod \"nova-cell0-cell-mapping-q9xf9\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") " pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.984142 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 12:57:10 crc kubenswrapper[4779]: I1128 12:57:10.996704 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-q9xf9\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") " pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.016355 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.028054 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.030408 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.034652 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.072104 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvz5t\" (UniqueName: \"kubernetes.io/projected/e860d8bc-f4c3-4923-ba29-3fb022978027-kube-api-access-fvz5t\") pod \"nova-cell0-cell-mapping-q9xf9\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") " pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.072922 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q9xf9" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.073367 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.073473 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dfwj\" (UniqueName: \"kubernetes.io/projected/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-kube-api-access-4dfwj\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.073558 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17062727-c25f-4ff0-90af-422314919f7a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"17062727-c25f-4ff0-90af-422314919f7a\") " pod="openstack/nova-scheduler-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.073715 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxd7\" (UniqueName: \"kubernetes.io/projected/17062727-c25f-4ff0-90af-422314919f7a-kube-api-access-vbxd7\") pod \"nova-scheduler-0\" (UID: \"17062727-c25f-4ff0-90af-422314919f7a\") " pod="openstack/nova-scheduler-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.073813 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.073941 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17062727-c25f-4ff0-90af-422314919f7a-config-data\") pod \"nova-scheduler-0\" (UID: \"17062727-c25f-4ff0-90af-422314919f7a\") " pod="openstack/nova-scheduler-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.131191 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 
12:57:11.133320 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.149396 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.149578 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.177981 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxd7\" (UniqueName: \"kubernetes.io/projected/17062727-c25f-4ff0-90af-422314919f7a-kube-api-access-vbxd7\") pod \"nova-scheduler-0\" (UID: \"17062727-c25f-4ff0-90af-422314919f7a\") " pod="openstack/nova-scheduler-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.178250 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.178280 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c5b7c8d-60b7-4dab-8392-036bb769ee86-logs\") pod \"nova-metadata-0\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") " pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.178298 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdf8b\" (UniqueName: \"kubernetes.io/projected/5c5b7c8d-60b7-4dab-8392-036bb769ee86-kube-api-access-jdf8b\") pod \"nova-metadata-0\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") " pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.178323 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5b7c8d-60b7-4dab-8392-036bb769ee86-config-data\") pod \"nova-metadata-0\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") " pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.178349 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c5b7c8d-60b7-4dab-8392-036bb769ee86-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") " pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.178375 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17062727-c25f-4ff0-90af-422314919f7a-config-data\") pod \"nova-scheduler-0\" (UID: \"17062727-c25f-4ff0-90af-422314919f7a\") " pod="openstack/nova-scheduler-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.178431 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.178457 4779 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4dfwj\" (UniqueName: \"kubernetes.io/projected/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-kube-api-access-4dfwj\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.178478 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17062727-c25f-4ff0-90af-422314919f7a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"17062727-c25f-4ff0-90af-422314919f7a\") " pod="openstack/nova-scheduler-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.192174 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17062727-c25f-4ff0-90af-422314919f7a-config-data\") pod \"nova-scheduler-0\" (UID: \"17062727-c25f-4ff0-90af-422314919f7a\") " pod="openstack/nova-scheduler-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.195291 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.196906 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.199771 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.203059 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17062727-c25f-4ff0-90af-422314919f7a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"17062727-c25f-4ff0-90af-422314919f7a\") " pod="openstack/nova-scheduler-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.203838 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.204600 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dfwj\" (UniqueName: \"kubernetes.io/projected/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-kube-api-access-4dfwj\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.204666 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-mgxbw"] Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.206033 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.211745 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.232681 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxd7\" (UniqueName: \"kubernetes.io/projected/17062727-c25f-4ff0-90af-422314919f7a-kube-api-access-vbxd7\") pod \"nova-scheduler-0\" (UID: \"17062727-c25f-4ff0-90af-422314919f7a\") " pod="openstack/nova-scheduler-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.239836 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-mgxbw"] Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.258483 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.269298 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.280522 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c5b7c8d-60b7-4dab-8392-036bb769ee86-logs\") pod \"nova-metadata-0\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") " pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.280573 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdf8b\" (UniqueName: \"kubernetes.io/projected/5c5b7c8d-60b7-4dab-8392-036bb769ee86-kube-api-access-jdf8b\") pod \"nova-metadata-0\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") " pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.280611 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5b7c8d-60b7-4dab-8392-036bb769ee86-config-data\") pod \"nova-metadata-0\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") " pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.280654 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c5b7c8d-60b7-4dab-8392-036bb769ee86-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") " pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.282563 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c5b7c8d-60b7-4dab-8392-036bb769ee86-logs\") pod \"nova-metadata-0\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") " pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.295744 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c5b7c8d-60b7-4dab-8392-036bb769ee86-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") " pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.295833 4779 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5b7c8d-60b7-4dab-8392-036bb769ee86-config-data\") pod \"nova-metadata-0\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") " pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.299452 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdf8b\" (UniqueName: \"kubernetes.io/projected/5c5b7c8d-60b7-4dab-8392-036bb769ee86-kube-api-access-jdf8b\") pod \"nova-metadata-0\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") " pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.382210 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-config\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.382556 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782e7605-5aed-4324-8ccf-964c7c961b48-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.382578 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.382600 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9qk\" (UniqueName: \"kubernetes.io/projected/782e7605-5aed-4324-8ccf-964c7c961b48-kube-api-access-nc9qk\") pod \"nova-api-0\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.382629 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.382657 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.382738 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/782e7605-5aed-4324-8ccf-964c7c961b48-logs\") pod \"nova-api-0\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.382767 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.382808 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ng5j\" (UniqueName: \"kubernetes.io/projected/f9820813-0205-41b5-a0cd-be93c4b28372-kube-api-access-6ng5j\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.382832 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782e7605-5aed-4324-8ccf-964c7c961b48-config-data\") pod \"nova-api-0\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.436324 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.485148 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.485256 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/782e7605-5aed-4324-8ccf-964c7c961b48-logs\") pod \"nova-api-0\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.485276 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.485302 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ng5j\" (UniqueName: \"kubernetes.io/projected/f9820813-0205-41b5-a0cd-be93c4b28372-kube-api-access-6ng5j\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.485319 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782e7605-5aed-4324-8ccf-964c7c961b48-config-data\") pod \"nova-api-0\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.485382 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-config\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.485401 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782e7605-5aed-4324-8ccf-964c7c961b48-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.485425 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.485445 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9qk\" (UniqueName: \"kubernetes.io/projected/782e7605-5aed-4324-8ccf-964c7c961b48-kube-api-access-nc9qk\") pod \"nova-api-0\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.485472 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.485756 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/782e7605-5aed-4324-8ccf-964c7c961b48-logs\") pod \"nova-api-0\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.486294 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.490460 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.494651 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-dns-svc\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.496652 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782e7605-5aed-4324-8ccf-964c7c961b48-config-data\") pod \"nova-api-0\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.496898 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-config\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc 
kubenswrapper[4779]: I1128 12:57:11.497062 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.506560 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782e7605-5aed-4324-8ccf-964c7c961b48-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.510209 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9qk\" (UniqueName: \"kubernetes.io/projected/782e7605-5aed-4324-8ccf-964c7c961b48-kube-api-access-nc9qk\") pod \"nova-api-0\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.510751 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ng5j\" (UniqueName: \"kubernetes.io/projected/f9820813-0205-41b5-a0cd-be93c4b28372-kube-api-access-6ng5j\") pod \"dnsmasq-dns-5fbc4d444f-mgxbw\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.583639 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.617810 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.657939 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.771912 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-v8d6d"] Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.773403 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-v8d6d"] Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.773428 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-q9xf9"] Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.773520 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.776005 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.777794 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 28 12:57:11 crc kubenswrapper[4779]: W1128 12:57:11.783047 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode860d8bc_f4c3_4923_ba29_3fb022978027.slice/crio-71963e8d2bec1a23545150fa49d9305406c6293cee7c62ed4f8e8e818f0ed179 WatchSource:0}: Error finding container 71963e8d2bec1a23545150fa49d9305406c6293cee7c62ed4f8e8e818f0ed179: Status 404 returned error can't find the container with id 71963e8d2bec1a23545150fa49d9305406c6293cee7c62ed4f8e8e818f0ed179 Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.894295 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.898078 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-v8d6d\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.898138 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrct\" (UniqueName: \"kubernetes.io/projected/16554019-fb17-4257-9bfd-1c1ffe3edb87-kube-api-access-bjrct\") pod \"nova-cell1-conductor-db-sync-v8d6d\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.898193 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-scripts\") pod \"nova-cell1-conductor-db-sync-v8d6d\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:11 crc kubenswrapper[4779]: I1128 12:57:11.898302 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-config-data\") pod \"nova-cell1-conductor-db-sync-v8d6d\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:11.999564 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-config-data\") pod \"nova-cell1-conductor-db-sync-v8d6d\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:11.999957 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-v8d6d\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " 
pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:11.999981 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrct\" (UniqueName: \"kubernetes.io/projected/16554019-fb17-4257-9bfd-1c1ffe3edb87-kube-api-access-bjrct\") pod \"nova-cell1-conductor-db-sync-v8d6d\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.000022 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-scripts\") pod \"nova-cell1-conductor-db-sync-v8d6d\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.011630 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-scripts\") pod \"nova-cell1-conductor-db-sync-v8d6d\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.012215 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-config-data\") pod \"nova-cell1-conductor-db-sync-v8d6d\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.012343 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-v8d6d\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.049931 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrct\" (UniqueName: \"kubernetes.io/projected/16554019-fb17-4257-9bfd-1c1ffe3edb87-kube-api-access-bjrct\") pod \"nova-cell1-conductor-db-sync-v8d6d\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.056294 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.127349 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.213447 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.246844 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.458152 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-mgxbw"] Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.686475 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"782e7605-5aed-4324-8ccf-964c7c961b48","Type":"ContainerStarted","Data":"820e14bf7be7ac126869638f91a9c1322fc5369a05b1763f7358f8630dedb81c"} Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.687973 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c","Type":"ContainerStarted","Data":"7722c5812d1fbb03222e0d8dd967fdbd3f235101c8f3a103af838506e49e92c1"} Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.689308 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c5b7c8d-60b7-4dab-8392-036bb769ee86","Type":"ContainerStarted","Data":"7de2b20d366a5c1907b6205f79337f05c6b517219a2ed54da1da98a2f4254008"} Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.690542 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q9xf9" event={"ID":"e860d8bc-f4c3-4923-ba29-3fb022978027","Type":"ContainerStarted","Data":"7e2a4b1a3e594104c9d15bdc1c3db153f7d2a04d3dd12886c0a1faf3fb8e6dad"} Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.690567 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q9xf9" event={"ID":"e860d8bc-f4c3-4923-ba29-3fb022978027","Type":"ContainerStarted","Data":"71963e8d2bec1a23545150fa49d9305406c6293cee7c62ed4f8e8e818f0ed179"} Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.692749 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"17062727-c25f-4ff0-90af-422314919f7a","Type":"ContainerStarted","Data":"2a576d5a86001d035c069ad0f78f1fcbfb0ffe6434ccb626b4919f22e2eecd9b"} Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.694404 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" event={"ID":"f9820813-0205-41b5-a0cd-be93c4b28372","Type":"ContainerStarted","Data":"efd54ead9f408c81bba5c67f0ac0dfeb059d2b7b3e4ad1fd680c3eb22fcf8e11"} Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.713840 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-q9xf9" podStartSLOduration=2.713824304 podStartE2EDuration="2.713824304s" podCreationTimestamp="2025-11-28 12:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:57:12.705996798 +0000 UTC m=+1293.271672152" watchObservedRunningTime="2025-11-28 12:57:12.713824304 +0000 UTC m=+1293.279499668" Nov 28 12:57:12 crc kubenswrapper[4779]: W1128 12:57:12.736244 4779 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16554019_fb17_4257_9bfd_1c1ffe3edb87.slice/crio-928da867cc665c8ccd14328f458f7044713abea6ede84ea4cde9613691762d65 WatchSource:0}: Error finding container 928da867cc665c8ccd14328f458f7044713abea6ede84ea4cde9613691762d65: Status 404 returned error can't find the container with id 928da867cc665c8ccd14328f458f7044713abea6ede84ea4cde9613691762d65 Nov 28 12:57:12 crc kubenswrapper[4779]: I1128 12:57:12.741065 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-v8d6d"] Nov 28 12:57:13 crc kubenswrapper[4779]: I1128 12:57:13.709798 4779 generic.go:334] "Generic (PLEG): container finished" podID="f9820813-0205-41b5-a0cd-be93c4b28372" containerID="1787d782a64f4c914b55b5a88528a74c40f680e1d50b370be44beb6976efd6fb" exitCode=0 Nov 28 12:57:13 crc kubenswrapper[4779]: I1128 12:57:13.710064 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" event={"ID":"f9820813-0205-41b5-a0cd-be93c4b28372","Type":"ContainerDied","Data":"1787d782a64f4c914b55b5a88528a74c40f680e1d50b370be44beb6976efd6fb"} Nov 28 12:57:13 crc kubenswrapper[4779]: I1128 12:57:13.748047 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-v8d6d" event={"ID":"16554019-fb17-4257-9bfd-1c1ffe3edb87","Type":"ContainerStarted","Data":"09d013213ac1b1692af87bda94005251c2127c5b1430ef379f877b931cfd15b4"} Nov 28 12:57:13 crc kubenswrapper[4779]: I1128 12:57:13.748088 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-v8d6d" event={"ID":"16554019-fb17-4257-9bfd-1c1ffe3edb87","Type":"ContainerStarted","Data":"928da867cc665c8ccd14328f458f7044713abea6ede84ea4cde9613691762d65"} Nov 28 12:57:13 crc kubenswrapper[4779]: I1128 12:57:13.767074 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-v8d6d" podStartSLOduration=2.767057175 podStartE2EDuration="2.767057175s" podCreationTimestamp="2025-11-28 12:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:57:13.753430916 +0000 UTC m=+1294.319106270" watchObservedRunningTime="2025-11-28 12:57:13.767057175 +0000 UTC m=+1294.332732529" Nov 28 12:57:14 crc kubenswrapper[4779]: I1128 12:57:14.535917 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 12:57:14 crc kubenswrapper[4779]: I1128 12:57:14.553522 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 12:57:15 crc kubenswrapper[4779]: I1128 12:57:15.753947 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" event={"ID":"f9820813-0205-41b5-a0cd-be93c4b28372","Type":"ContainerStarted","Data":"98715bf2e6061faa11ee46c9723df1d5325386fd59bd784f15cec58a04dd23bd"} Nov 28 12:57:15 crc kubenswrapper[4779]: I1128 12:57:15.754971 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:57:15 crc kubenswrapper[4779]: I1128 12:57:15.786747 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" podStartSLOduration=4.78671813 podStartE2EDuration="4.78671813s" podCreationTimestamp="2025-11-28 12:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:57:15.777335073 +0000 UTC m=+1296.343010427" watchObservedRunningTime="2025-11-28 12:57:15.78671813 +0000 UTC m=+1296.352393494" Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.791729 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"782e7605-5aed-4324-8ccf-964c7c961b48","Type":"ContainerStarted","Data":"bcbd4d0515f2398ef5fc794df4e7fc77fc020e3fd90f2ca0824eba0521907509"} Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.792433 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"782e7605-5aed-4324-8ccf-964c7c961b48","Type":"ContainerStarted","Data":"586627fb9293ded9d2863454aef0d2b0ed4031f0ad4a80b2ccd16dd0ac73e865"} Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.794492 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c","Type":"ContainerStarted","Data":"48a18811228fab9f492469a731b2670a73ce5fa7d755df34f000c2ac5c82d9d4"} Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.795713 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://48a18811228fab9f492469a731b2670a73ce5fa7d755df34f000c2ac5c82d9d4" gracePeriod=30 Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.808006 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c5b7c8d-60b7-4dab-8392-036bb769ee86","Type":"ContainerStarted","Data":"f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784"} Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.808123 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5c5b7c8d-60b7-4dab-8392-036bb769ee86" containerName="nova-metadata-log" containerID="cri-o://fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d" gracePeriod=30 Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.808180 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5c5b7c8d-60b7-4dab-8392-036bb769ee86" containerName="nova-metadata-metadata" containerID="cri-o://f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784" gracePeriod=30 Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.808138 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c5b7c8d-60b7-4dab-8392-036bb769ee86","Type":"ContainerStarted","Data":"fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d"} Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.812597 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"17062727-c25f-4ff0-90af-422314919f7a","Type":"ContainerStarted","Data":"c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949"} Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.861126 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.302907894 podStartE2EDuration="8.861107216s" podCreationTimestamp="2025-11-28 12:57:10 +0000 UTC" firstStartedPulling="2025-11-28 12:57:12.051729622 +0000 UTC m=+1292.617404976" lastFinishedPulling="2025-11-28 12:57:17.609928934 +0000 UTC m=+1298.175604298" 
observedRunningTime="2025-11-28 12:57:18.856684159 +0000 UTC m=+1299.422359533" watchObservedRunningTime="2025-11-28 12:57:18.861107216 +0000 UTC m=+1299.426782580" Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.868038 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.449868113 podStartE2EDuration="7.868025328s" podCreationTimestamp="2025-11-28 12:57:11 +0000 UTC" firstStartedPulling="2025-11-28 12:57:12.190784983 +0000 UTC m=+1292.756460337" lastFinishedPulling="2025-11-28 12:57:17.608942198 +0000 UTC m=+1298.174617552" observedRunningTime="2025-11-28 12:57:18.836712403 +0000 UTC m=+1299.402387767" watchObservedRunningTime="2025-11-28 12:57:18.868025328 +0000 UTC m=+1299.433700692" Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.891848 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.179257207 podStartE2EDuration="8.891825484s" podCreationTimestamp="2025-11-28 12:57:10 +0000 UTC" firstStartedPulling="2025-11-28 12:57:11.902663907 +0000 UTC m=+1292.468339261" lastFinishedPulling="2025-11-28 12:57:17.615232174 +0000 UTC m=+1298.180907538" observedRunningTime="2025-11-28 12:57:18.874070337 +0000 UTC m=+1299.439745731" watchObservedRunningTime="2025-11-28 12:57:18.891825484 +0000 UTC m=+1299.457500848" Nov 28 12:57:18 crc kubenswrapper[4779]: I1128 12:57:18.893539 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.495598826 podStartE2EDuration="8.893522789s" podCreationTimestamp="2025-11-28 12:57:10 +0000 UTC" firstStartedPulling="2025-11-28 12:57:12.214654941 +0000 UTC m=+1292.780330295" lastFinishedPulling="2025-11-28 12:57:17.612578904 +0000 UTC m=+1298.178254258" observedRunningTime="2025-11-28 12:57:18.891127746 +0000 UTC m=+1299.456803150" watchObservedRunningTime="2025-11-28 12:57:18.893522789 +0000 UTC m=+1299.459198163" Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.572345 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.698760 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5b7c8d-60b7-4dab-8392-036bb769ee86-config-data\") pod \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") "
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.698864 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c5b7c8d-60b7-4dab-8392-036bb769ee86-combined-ca-bundle\") pod \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") "
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.699043 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c5b7c8d-60b7-4dab-8392-036bb769ee86-logs\") pod \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") "
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.699080 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdf8b\" (UniqueName: \"kubernetes.io/projected/5c5b7c8d-60b7-4dab-8392-036bb769ee86-kube-api-access-jdf8b\") pod \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\" (UID: \"5c5b7c8d-60b7-4dab-8392-036bb769ee86\") "
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.699732 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c5b7c8d-60b7-4dab-8392-036bb769ee86-logs" (OuterVolumeSpecName: "logs") pod "5c5b7c8d-60b7-4dab-8392-036bb769ee86" (UID: "5c5b7c8d-60b7-4dab-8392-036bb769ee86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.706344 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5b7c8d-60b7-4dab-8392-036bb769ee86-kube-api-access-jdf8b" (OuterVolumeSpecName: "kube-api-access-jdf8b") pod "5c5b7c8d-60b7-4dab-8392-036bb769ee86" (UID: "5c5b7c8d-60b7-4dab-8392-036bb769ee86"). InnerVolumeSpecName "kube-api-access-jdf8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.743738 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5b7c8d-60b7-4dab-8392-036bb769ee86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c5b7c8d-60b7-4dab-8392-036bb769ee86" (UID: "5c5b7c8d-60b7-4dab-8392-036bb769ee86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.768507 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5b7c8d-60b7-4dab-8392-036bb769ee86-config-data" (OuterVolumeSpecName: "config-data") pod "5c5b7c8d-60b7-4dab-8392-036bb769ee86" (UID: "5c5b7c8d-60b7-4dab-8392-036bb769ee86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.801896 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5b7c8d-60b7-4dab-8392-036bb769ee86-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.801925 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c5b7c8d-60b7-4dab-8392-036bb769ee86-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.801935 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c5b7c8d-60b7-4dab-8392-036bb769ee86-logs\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.801944 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdf8b\" (UniqueName: \"kubernetes.io/projected/5c5b7c8d-60b7-4dab-8392-036bb769ee86-kube-api-access-jdf8b\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.822672 4779 generic.go:334] "Generic (PLEG): container finished" podID="5c5b7c8d-60b7-4dab-8392-036bb769ee86" containerID="f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784" exitCode=0
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.822698 4779 generic.go:334] "Generic (PLEG): container finished" podID="5c5b7c8d-60b7-4dab-8392-036bb769ee86" containerID="fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d" exitCode=143
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.822780 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.824860 4779 generic.go:334] "Generic (PLEG): container finished" podID="e860d8bc-f4c3-4923-ba29-3fb022978027" containerID="7e2a4b1a3e594104c9d15bdc1c3db153f7d2a04d3dd12886c0a1faf3fb8e6dad" exitCode=0
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.831350 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c5b7c8d-60b7-4dab-8392-036bb769ee86","Type":"ContainerDied","Data":"f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784"}
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.831742 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c5b7c8d-60b7-4dab-8392-036bb769ee86","Type":"ContainerDied","Data":"fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d"}
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.831764 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c5b7c8d-60b7-4dab-8392-036bb769ee86","Type":"ContainerDied","Data":"7de2b20d366a5c1907b6205f79337f05c6b517219a2ed54da1da98a2f4254008"}
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.831775 4779 scope.go:117] "RemoveContainer" containerID="f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.831784 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q9xf9" event={"ID":"e860d8bc-f4c3-4923-ba29-3fb022978027","Type":"ContainerDied","Data":"7e2a4b1a3e594104c9d15bdc1c3db153f7d2a04d3dd12886c0a1faf3fb8e6dad"}
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.866488 4779 scope.go:117] "RemoveContainer" containerID="fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.886250 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.906160 4779 scope.go:117] "RemoveContainer" containerID="f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784"
Nov 28 12:57:19 crc kubenswrapper[4779]: E1128 12:57:19.908349 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784\": container with ID starting with f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784 not found: ID does not exist" containerID="f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.908379 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784"} err="failed to get container status \"f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784\": rpc error: code = NotFound desc = could not find container \"f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784\": container with ID starting with f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784 not found: ID does not exist"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.908401 4779 scope.go:117] "RemoveContainer" containerID="fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.909085 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 12:57:19 crc kubenswrapper[4779]: E1128 12:57:19.909121 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d\": container with ID starting with fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d not found: ID does not exist" containerID="fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.909172 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d"} err="failed to get container status \"fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d\": rpc error: code = NotFound desc = could not find container \"fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d\": container with ID starting with fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d not found: ID does not exist"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.909205 4779 scope.go:117] "RemoveContainer" containerID="f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.909962 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784"} err="failed to get container status \"f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784\": rpc error: code = NotFound desc = could not find container \"f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784\": container with ID starting with f717cbb30b281a75996c3c7dd0e3a204632e15294e46c74f8120c49e771a6784 not found: ID does not exist"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.909982 4779 scope.go:117] "RemoveContainer" containerID="fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.910360 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d"} err="failed to get container status \"fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d\": rpc error: code = NotFound desc = could not find container \"fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d\": container with ID starting with fa23771511ba024855145c41aaf7e4eae5fec9d3d2f577ddaf971f8096b5c34d not found: ID does not exist"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.927183 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 12:57:19 crc kubenswrapper[4779]: E1128 12:57:19.927718 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5b7c8d-60b7-4dab-8392-036bb769ee86" containerName="nova-metadata-log"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.927740 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5b7c8d-60b7-4dab-8392-036bb769ee86" containerName="nova-metadata-log"
Nov 28 12:57:19 crc kubenswrapper[4779]: E1128 12:57:19.927769 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5b7c8d-60b7-4dab-8392-036bb769ee86" containerName="nova-metadata-metadata"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.927775 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5b7c8d-60b7-4dab-8392-036bb769ee86" containerName="nova-metadata-metadata"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.927964 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5b7c8d-60b7-4dab-8392-036bb769ee86" containerName="nova-metadata-log"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.927991 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5b7c8d-60b7-4dab-8392-036bb769ee86" containerName="nova-metadata-metadata"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.929140 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.931851 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.934786 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 28 12:57:19 crc kubenswrapper[4779]: I1128 12:57:19.936870 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.005776 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.006103 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-config-data\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.006237 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-logs\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.006798 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.007082 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jwn6\" (UniqueName: \"kubernetes.io/projected/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-kube-api-access-8jwn6\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.108991 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jwn6\" (UniqueName: \"kubernetes.io/projected/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-kube-api-access-8jwn6\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.109104 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.109148 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-config-data\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.109218 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-logs\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.109307 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.110058 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-logs\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.113276 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.114703 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-config-data\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.124540 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.126403 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jwn6\" (UniqueName: \"kubernetes.io/projected/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-kube-api-access-8jwn6\") pod \"nova-metadata-0\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.246713 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.753670 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 12:57:20 crc kubenswrapper[4779]: I1128 12:57:20.840480 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5ac5adf-85a6-400c-88e3-27ab1746ff4a","Type":"ContainerStarted","Data":"05abaaab1caee58a5f17f4d40d91e846a1353279dfe65348d8d74a23c49c529a"}
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.181734 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q9xf9"
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.272080 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.329170 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvz5t\" (UniqueName: \"kubernetes.io/projected/e860d8bc-f4c3-4923-ba29-3fb022978027-kube-api-access-fvz5t\") pod \"e860d8bc-f4c3-4923-ba29-3fb022978027\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") "
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.329246 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-scripts\") pod \"e860d8bc-f4c3-4923-ba29-3fb022978027\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") "
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.329277 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-config-data\") pod \"e860d8bc-f4c3-4923-ba29-3fb022978027\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") "
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.329408 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-combined-ca-bundle\") pod \"e860d8bc-f4c3-4923-ba29-3fb022978027\" (UID: \"e860d8bc-f4c3-4923-ba29-3fb022978027\") "
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.334952 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e860d8bc-f4c3-4923-ba29-3fb022978027-kube-api-access-fvz5t" (OuterVolumeSpecName: "kube-api-access-fvz5t") pod "e860d8bc-f4c3-4923-ba29-3fb022978027" (UID: "e860d8bc-f4c3-4923-ba29-3fb022978027"). InnerVolumeSpecName "kube-api-access-fvz5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.335216 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-scripts" (OuterVolumeSpecName: "scripts") pod "e860d8bc-f4c3-4923-ba29-3fb022978027" (UID: "e860d8bc-f4c3-4923-ba29-3fb022978027"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.356400 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e860d8bc-f4c3-4923-ba29-3fb022978027" (UID: "e860d8bc-f4c3-4923-ba29-3fb022978027"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.365723 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-config-data" (OuterVolumeSpecName: "config-data") pod "e860d8bc-f4c3-4923-ba29-3fb022978027" (UID: "e860d8bc-f4c3-4923-ba29-3fb022978027"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.431178 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.431213 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvz5t\" (UniqueName: \"kubernetes.io/projected/e860d8bc-f4c3-4923-ba29-3fb022978027-kube-api-access-fvz5t\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.431224 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.431233 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e860d8bc-f4c3-4923-ba29-3fb022978027-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.436878 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.436910 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.469594 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.619849 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.619918 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.660423 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw"
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.738855 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-pcgm5"]
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.740007 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" podUID="67f75fbe-5004-449a-bd16-51659985e95e" containerName="dnsmasq-dns" containerID="cri-o://cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8" gracePeriod=10
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.808226 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5b7c8d-60b7-4dab-8392-036bb769ee86" path="/var/lib/kubelet/pods/5c5b7c8d-60b7-4dab-8392-036bb769ee86/volumes"
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.856118 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q9xf9"
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.856144 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q9xf9" event={"ID":"e860d8bc-f4c3-4923-ba29-3fb022978027","Type":"ContainerDied","Data":"71963e8d2bec1a23545150fa49d9305406c6293cee7c62ed4f8e8e818f0ed179"}
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.856207 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71963e8d2bec1a23545150fa49d9305406c6293cee7c62ed4f8e8e818f0ed179"
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.871210 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5ac5adf-85a6-400c-88e3-27ab1746ff4a","Type":"ContainerStarted","Data":"b1fa123649f3c108bb67db73bb26b31a7913549f4f8a7fff5d195c0124bea96b"}
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.871256 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5ac5adf-85a6-400c-88e3-27ab1746ff4a","Type":"ContainerStarted","Data":"4e9bed36f1c6203bddc755b36b0e07ec9816e3ee6070487103f4c5e72b633607"}
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.903382 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.903364225 podStartE2EDuration="2.903364225s" podCreationTimestamp="2025-11-28 12:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:57:21.897191362 +0000 UTC m=+1302.462866716" watchObservedRunningTime="2025-11-28 12:57:21.903364225 +0000 UTC m=+1302.469039579"
Nov 28 12:57:21 crc kubenswrapper[4779]: I1128 12:57:21.938839 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.034082 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.034298 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="782e7605-5aed-4324-8ccf-964c7c961b48" containerName="nova-api-log" containerID="cri-o://586627fb9293ded9d2863454aef0d2b0ed4031f0ad4a80b2ccd16dd0ac73e865" gracePeriod=30
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.034495 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="782e7605-5aed-4324-8ccf-964c7c961b48" containerName="nova-api-api" containerID="cri-o://bcbd4d0515f2398ef5fc794df4e7fc77fc020e3fd90f2ca0824eba0521907509" gracePeriod=30
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.049296 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="782e7605-5aed-4324-8ccf-964c7c961b48" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": EOF"
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.049366 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="782e7605-5aed-4324-8ccf-964c7c961b48" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": EOF"
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.056727 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.306015 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5"
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.448156 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.504531 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-ovsdbserver-nb\") pod \"67f75fbe-5004-449a-bd16-51659985e95e\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") "
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.504592 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-dns-svc\") pod \"67f75fbe-5004-449a-bd16-51659985e95e\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") "
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.504669 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-dns-swift-storage-0\") pod \"67f75fbe-5004-449a-bd16-51659985e95e\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") "
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.504773 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-ovsdbserver-sb\") pod \"67f75fbe-5004-449a-bd16-51659985e95e\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") "
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.504814 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q92gr\" (UniqueName: \"kubernetes.io/projected/67f75fbe-5004-449a-bd16-51659985e95e-kube-api-access-q92gr\") pod \"67f75fbe-5004-449a-bd16-51659985e95e\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") "
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.504892 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-config\") pod \"67f75fbe-5004-449a-bd16-51659985e95e\" (UID: \"67f75fbe-5004-449a-bd16-51659985e95e\") "
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.511367 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f75fbe-5004-449a-bd16-51659985e95e-kube-api-access-q92gr" (OuterVolumeSpecName: "kube-api-access-q92gr") pod "67f75fbe-5004-449a-bd16-51659985e95e" (UID: "67f75fbe-5004-449a-bd16-51659985e95e"). InnerVolumeSpecName "kube-api-access-q92gr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.557202 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "67f75fbe-5004-449a-bd16-51659985e95e" (UID: "67f75fbe-5004-449a-bd16-51659985e95e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.560614 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "67f75fbe-5004-449a-bd16-51659985e95e" (UID: "67f75fbe-5004-449a-bd16-51659985e95e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.566411 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "67f75fbe-5004-449a-bd16-51659985e95e" (UID: "67f75fbe-5004-449a-bd16-51659985e95e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.569824 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-config" (OuterVolumeSpecName: "config") pod "67f75fbe-5004-449a-bd16-51659985e95e" (UID: "67f75fbe-5004-449a-bd16-51659985e95e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.573228 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "67f75fbe-5004-449a-bd16-51659985e95e" (UID: "67f75fbe-5004-449a-bd16-51659985e95e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.607384 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.607435 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q92gr\" (UniqueName: \"kubernetes.io/projected/67f75fbe-5004-449a-bd16-51659985e95e-kube-api-access-q92gr\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.607448 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-config\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.607456 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.607466 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.607475 4779 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67f75fbe-5004-449a-bd16-51659985e95e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.876458 4779 generic.go:334] "Generic (PLEG): container finished" podID="782e7605-5aed-4324-8ccf-964c7c961b48" containerID="586627fb9293ded9d2863454aef0d2b0ed4031f0ad4a80b2ccd16dd0ac73e865" exitCode=143
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.876543 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"782e7605-5aed-4324-8ccf-964c7c961b48","Type":"ContainerDied","Data":"586627fb9293ded9d2863454aef0d2b0ed4031f0ad4a80b2ccd16dd0ac73e865"}
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.879471 4779 generic.go:334] "Generic (PLEG): container finished" podID="67f75fbe-5004-449a-bd16-51659985e95e" containerID="cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8" exitCode=0
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.879510 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" event={"ID":"67f75fbe-5004-449a-bd16-51659985e95e","Type":"ContainerDied","Data":"cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8"}
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.879537 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5" event={"ID":"67f75fbe-5004-449a-bd16-51659985e95e","Type":"ContainerDied","Data":"e326ad7149fa46402d1a16f8eb7dbdca6e68e92808c924974a95f2e913850e91"}
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.879555 4779 scope.go:117] "RemoveContainer" containerID="cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8"
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.879687 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bc4c6c9-pcgm5"
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.946274 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-pcgm5"]
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.954765 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f6bc4c6c9-pcgm5"]
Nov 28 12:57:22 crc kubenswrapper[4779]: I1128 12:57:22.989941 4779 scope.go:117] "RemoveContainer" containerID="dac14d327ca31947e0c5b44051e25278635e0a3811f5c4bb270eb3324ab0a45a"
Nov 28 12:57:23 crc kubenswrapper[4779]: I1128 12:57:23.019016 4779 scope.go:117] "RemoveContainer" containerID="cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8"
Nov 28 12:57:23 crc kubenswrapper[4779]: E1128 12:57:23.019501 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8\": container with ID starting with cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8 not found: ID does not exist" containerID="cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8"
Nov 28 12:57:23 crc kubenswrapper[4779]: I1128 12:57:23.019558 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8"} err="failed to get container status \"cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8\": rpc error: code = NotFound desc = could not find container \"cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8\": container with ID starting with cddf9679e5f2f9f9de75352b786204b93933f8aeecebc3aef560f473634e57b8 not found: ID does not exist"
Nov 28 12:57:23 crc kubenswrapper[4779]: I1128 12:57:23.019586 4779 scope.go:117] "RemoveContainer" containerID="dac14d327ca31947e0c5b44051e25278635e0a3811f5c4bb270eb3324ab0a45a"
Nov 28 12:57:23 crc kubenswrapper[4779]: E1128 12:57:23.020013 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dac14d327ca31947e0c5b44051e25278635e0a3811f5c4bb270eb3324ab0a45a\": container with ID starting with dac14d327ca31947e0c5b44051e25278635e0a3811f5c4bb270eb3324ab0a45a not found: ID does not exist" containerID="dac14d327ca31947e0c5b44051e25278635e0a3811f5c4bb270eb3324ab0a45a"
Nov 28 12:57:23 crc kubenswrapper[4779]: I1128 12:57:23.020052 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac14d327ca31947e0c5b44051e25278635e0a3811f5c4bb270eb3324ab0a45a"} err="failed to get container status \"dac14d327ca31947e0c5b44051e25278635e0a3811f5c4bb270eb3324ab0a45a\": rpc error: code = NotFound desc = could not find container \"dac14d327ca31947e0c5b44051e25278635e0a3811f5c4bb270eb3324ab0a45a\": container with ID starting with dac14d327ca31947e0c5b44051e25278635e0a3811f5c4bb270eb3324ab0a45a not found: ID does not exist"
Nov 28 12:57:23 crc kubenswrapper[4779]: I1128 12:57:23.740037 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f75fbe-5004-449a-bd16-51659985e95e" path="/var/lib/kubelet/pods/67f75fbe-5004-449a-bd16-51659985e95e/volumes"
Nov 28 12:57:23 crc kubenswrapper[4779]: I1128 12:57:23.914534 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d5ac5adf-85a6-400c-88e3-27ab1746ff4a" containerName="nova-metadata-log" containerID="cri-o://4e9bed36f1c6203bddc755b36b0e07ec9816e3ee6070487103f4c5e72b633607" gracePeriod=30
Nov 28 12:57:23 crc kubenswrapper[4779]: I1128 12:57:23.914778 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d5ac5adf-85a6-400c-88e3-27ab1746ff4a" containerName="nova-metadata-metadata" containerID="cri-o://b1fa123649f3c108bb67db73bb26b31a7913549f4f8a7fff5d195c0124bea96b" gracePeriod=30
Nov 28 12:57:23 crc kubenswrapper[4779]: I1128 12:57:23.915062 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="17062727-c25f-4ff0-90af-422314919f7a" containerName="nova-scheduler-scheduler" containerID="cri-o://c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949" gracePeriod=30
Nov 28 12:57:24 crc kubenswrapper[4779]: I1128 12:57:24.940673 4779 generic.go:334] "Generic (PLEG): container finished" podID="d5ac5adf-85a6-400c-88e3-27ab1746ff4a" containerID="b1fa123649f3c108bb67db73bb26b31a7913549f4f8a7fff5d195c0124bea96b" exitCode=0
Nov 28 12:57:24 crc kubenswrapper[4779]: I1128 12:57:24.941000 4779 generic.go:334] "Generic (PLEG): container finished" podID="d5ac5adf-85a6-400c-88e3-27ab1746ff4a" containerID="4e9bed36f1c6203bddc755b36b0e07ec9816e3ee6070487103f4c5e72b633607" exitCode=143
Nov 28 12:57:24 crc kubenswrapper[4779]: I1128 12:57:24.941033 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5ac5adf-85a6-400c-88e3-27ab1746ff4a","Type":"ContainerDied","Data":"b1fa123649f3c108bb67db73bb26b31a7913549f4f8a7fff5d195c0124bea96b"}
Nov 28 12:57:24 crc kubenswrapper[4779]: I1128 12:57:24.941073 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5ac5adf-85a6-400c-88e3-27ab1746ff4a","Type":"ContainerDied","Data":"4e9bed36f1c6203bddc755b36b0e07ec9816e3ee6070487103f4c5e72b633607"}
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.246865 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.246972 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.697415 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.827135 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.869285 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-config-data\") pod \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") "
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.869374 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jwn6\" (UniqueName: \"kubernetes.io/projected/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-kube-api-access-8jwn6\") pod \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") "
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.869414 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-combined-ca-bundle\") pod \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") "
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.869430 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-logs\") pod \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") "
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.869553 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-nova-metadata-tls-certs\") pod \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\" (UID: \"d5ac5adf-85a6-400c-88e3-27ab1746ff4a\") "
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.873531 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-logs" (OuterVolumeSpecName: "logs") pod "d5ac5adf-85a6-400c-88e3-27ab1746ff4a" (UID: "d5ac5adf-85a6-400c-88e3-27ab1746ff4a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.878402 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-kube-api-access-8jwn6" (OuterVolumeSpecName: "kube-api-access-8jwn6") pod "d5ac5adf-85a6-400c-88e3-27ab1746ff4a" (UID: "d5ac5adf-85a6-400c-88e3-27ab1746ff4a"). InnerVolumeSpecName "kube-api-access-8jwn6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.912658 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-config-data" (OuterVolumeSpecName: "config-data") pod "d5ac5adf-85a6-400c-88e3-27ab1746ff4a" (UID: "d5ac5adf-85a6-400c-88e3-27ab1746ff4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.915129 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5ac5adf-85a6-400c-88e3-27ab1746ff4a" (UID: "d5ac5adf-85a6-400c-88e3-27ab1746ff4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.945383 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d5ac5adf-85a6-400c-88e3-27ab1746ff4a" (UID: "d5ac5adf-85a6-400c-88e3-27ab1746ff4a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.953002 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5ac5adf-85a6-400c-88e3-27ab1746ff4a","Type":"ContainerDied","Data":"05abaaab1caee58a5f17f4d40d91e846a1353279dfe65348d8d74a23c49c529a"}
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.953056 4779 scope.go:117] "RemoveContainer" containerID="b1fa123649f3c108bb67db73bb26b31a7913549f4f8a7fff5d195c0124bea96b"
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.953015 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.954994 4779 generic.go:334] "Generic (PLEG): container finished" podID="17062727-c25f-4ff0-90af-422314919f7a" containerID="c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949" exitCode=0
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.955132 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"17062727-c25f-4ff0-90af-422314919f7a","Type":"ContainerDied","Data":"c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949"}
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.955349 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"17062727-c25f-4ff0-90af-422314919f7a","Type":"ContainerDied","Data":"2a576d5a86001d035c069ad0f78f1fcbfb0ffe6434ccb626b4919f22e2eecd9b"}
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.955230 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.970882 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbxd7\" (UniqueName: \"kubernetes.io/projected/17062727-c25f-4ff0-90af-422314919f7a-kube-api-access-vbxd7\") pod \"17062727-c25f-4ff0-90af-422314919f7a\" (UID: \"17062727-c25f-4ff0-90af-422314919f7a\") "
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.970942 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17062727-c25f-4ff0-90af-422314919f7a-combined-ca-bundle\") pod \"17062727-c25f-4ff0-90af-422314919f7a\" (UID: \"17062727-c25f-4ff0-90af-422314919f7a\") "
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.972992 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17062727-c25f-4ff0-90af-422314919f7a-config-data\") pod \"17062727-c25f-4ff0-90af-422314919f7a\" (UID: \"17062727-c25f-4ff0-90af-422314919f7a\") "
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.974127 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.974161 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jwn6\" (UniqueName: \"kubernetes.io/projected/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-kube-api-access-8jwn6\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.974184 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.974201 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-logs\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.974218 4779 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ac5adf-85a6-400c-88e3-27ab1746ff4a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.977740 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17062727-c25f-4ff0-90af-422314919f7a-kube-api-access-vbxd7" (OuterVolumeSpecName: "kube-api-access-vbxd7") pod "17062727-c25f-4ff0-90af-422314919f7a" (UID: "17062727-c25f-4ff0-90af-422314919f7a"). InnerVolumeSpecName "kube-api-access-vbxd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.990038 4779 scope.go:117] "RemoveContainer" containerID="4e9bed36f1c6203bddc755b36b0e07ec9816e3ee6070487103f4c5e72b633607"
Nov 28 12:57:25 crc kubenswrapper[4779]: I1128 12:57:25.992205 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.009515 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17062727-c25f-4ff0-90af-422314919f7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17062727-c25f-4ff0-90af-422314919f7a" (UID: "17062727-c25f-4ff0-90af-422314919f7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.015558 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.015830 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17062727-c25f-4ff0-90af-422314919f7a-config-data" (OuterVolumeSpecName: "config-data") pod "17062727-c25f-4ff0-90af-422314919f7a" (UID: "17062727-c25f-4ff0-90af-422314919f7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.025158 4779 scope.go:117] "RemoveContainer" containerID="c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.040558 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 12:57:26 crc kubenswrapper[4779]: E1128 12:57:26.040974 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e860d8bc-f4c3-4923-ba29-3fb022978027" containerName="nova-manage"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.040992 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e860d8bc-f4c3-4923-ba29-3fb022978027" containerName="nova-manage"
Nov 28 12:57:26 crc kubenswrapper[4779]: E1128 12:57:26.041011 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ac5adf-85a6-400c-88e3-27ab1746ff4a" containerName="nova-metadata-metadata"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.041017 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ac5adf-85a6-400c-88e3-27ab1746ff4a" containerName="nova-metadata-metadata"
Nov 28 12:57:26 crc kubenswrapper[4779]: E1128 12:57:26.041056 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f75fbe-5004-449a-bd16-51659985e95e" containerName="dnsmasq-dns"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.041063 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f75fbe-5004-449a-bd16-51659985e95e" containerName="dnsmasq-dns"
Nov 28 12:57:26 crc kubenswrapper[4779]: E1128 12:57:26.041073 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17062727-c25f-4ff0-90af-422314919f7a" containerName="nova-scheduler-scheduler"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.041079 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="17062727-c25f-4ff0-90af-422314919f7a" containerName="nova-scheduler-scheduler"
Nov 28 12:57:26 crc kubenswrapper[4779]: E1128 12:57:26.041122 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f75fbe-5004-449a-bd16-51659985e95e" containerName="init"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.041129 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f75fbe-5004-449a-bd16-51659985e95e" containerName="init"
Nov 28 12:57:26 crc kubenswrapper[4779]: E1128 12:57:26.041140 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ac5adf-85a6-400c-88e3-27ab1746ff4a" containerName="nova-metadata-log"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.041146 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ac5adf-85a6-400c-88e3-27ab1746ff4a" containerName="nova-metadata-log"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.041393 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="17062727-c25f-4ff0-90af-422314919f7a" containerName="nova-scheduler-scheduler"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.041408 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f75fbe-5004-449a-bd16-51659985e95e" containerName="dnsmasq-dns"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.041436 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ac5adf-85a6-400c-88e3-27ab1746ff4a" containerName="nova-metadata-log"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.041446 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="e860d8bc-f4c3-4923-ba29-3fb022978027" containerName="nova-manage"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.041461 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ac5adf-85a6-400c-88e3-27ab1746ff4a" containerName="nova-metadata-metadata"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.042668 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.050173 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.050619 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.055390 4779 scope.go:117] "RemoveContainer" containerID="c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949"
Nov 28 12:57:26 crc kubenswrapper[4779]: E1128 12:57:26.055865 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949\": container with ID starting with c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949 not found: ID does not exist" containerID="c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.055901 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949"} err="failed to get container status \"c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949\": rpc error: code = NotFound desc = could not find container \"c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949\": container with ID starting with c6ed363c23c42a562a6c5b0aba76930cc6af044b6b02b520840f5a7873631949 not found: ID does not exist"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.059294 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.075922 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbxd7\" (UniqueName: \"kubernetes.io/projected/17062727-c25f-4ff0-90af-422314919f7a-kube-api-access-vbxd7\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.075949 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17062727-c25f-4ff0-90af-422314919f7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.075958 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17062727-c25f-4ff0-90af-422314919f7a-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.178498 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t2tk\" (UniqueName: \"kubernetes.io/projected/12323b44-9b4d-4d78-991e-b92d4daefcb6-kube-api-access-4t2tk\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.178573 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.178623 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.178870 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12323b44-9b4d-4d78-991e-b92d4daefcb6-logs\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.178948 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-config-data\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.281609 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12323b44-9b4d-4d78-991e-b92d4daefcb6-logs\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.281683 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-config-data\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.281883 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t2tk\" (UniqueName: \"kubernetes.io/projected/12323b44-9b4d-4d78-991e-b92d4daefcb6-kube-api-access-4t2tk\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.281934 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.281974 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.282297 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12323b44-9b4d-4d78-991e-b92d4daefcb6-logs\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.287121 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.287437 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.288145 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-config-data\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.305637 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t2tk\" (UniqueName: \"kubernetes.io/projected/12323b44-9b4d-4d78-991e-b92d4daefcb6-kube-api-access-4t2tk\") pod \"nova-metadata-0\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.312359 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.328499 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.344725 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.346877 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.349546 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.357394 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.363634 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.487272 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1cfc693-c027-4c51-bc8e-d1d5ac495223-config-data\") pod \"nova-scheduler-0\" (UID: \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\") " pod="openstack/nova-scheduler-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.487350 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1cfc693-c027-4c51-bc8e-d1d5ac495223-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\") " pod="openstack/nova-scheduler-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.487567 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbvgx\" (UniqueName: \"kubernetes.io/projected/a1cfc693-c027-4c51-bc8e-d1d5ac495223-kube-api-access-kbvgx\") pod \"nova-scheduler-0\" (UID: \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\") " pod="openstack/nova-scheduler-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.589845 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbvgx\" (UniqueName: \"kubernetes.io/projected/a1cfc693-c027-4c51-bc8e-d1d5ac495223-kube-api-access-kbvgx\") pod \"nova-scheduler-0\" (UID: \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\") " pod="openstack/nova-scheduler-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.590148 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1cfc693-c027-4c51-bc8e-d1d5ac495223-config-data\") pod \"nova-scheduler-0\" (UID: \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\") " pod="openstack/nova-scheduler-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.590185 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1cfc693-c027-4c51-bc8e-d1d5ac495223-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\") " pod="openstack/nova-scheduler-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.601413 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1cfc693-c027-4c51-bc8e-d1d5ac495223-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\") " pod="openstack/nova-scheduler-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.613810 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1cfc693-c027-4c51-bc8e-d1d5ac495223-config-data\") pod \"nova-scheduler-0\" (UID: \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\") " pod="openstack/nova-scheduler-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.616309 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbvgx\" (UniqueName: \"kubernetes.io/projected/a1cfc693-c027-4c51-bc8e-d1d5ac495223-kube-api-access-kbvgx\") pod \"nova-scheduler-0\" (UID: \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\") " pod="openstack/nova-scheduler-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.684474 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.855826 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 12:57:26 crc kubenswrapper[4779]: I1128 12:57:26.981938 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12323b44-9b4d-4d78-991e-b92d4daefcb6","Type":"ContainerStarted","Data":"d11105c1cb8e05344957cd9447279dfd352de061eeb64f4662c64252dce4cd22"}
Nov 28 12:57:27 crc kubenswrapper[4779]: I1128 12:57:27.170250 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 12:57:27 crc kubenswrapper[4779]: I1128 12:57:27.741907 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17062727-c25f-4ff0-90af-422314919f7a" path="/var/lib/kubelet/pods/17062727-c25f-4ff0-90af-422314919f7a/volumes"
Nov 28 12:57:27 crc kubenswrapper[4779]: I1128 12:57:27.742908 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ac5adf-85a6-400c-88e3-27ab1746ff4a" path="/var/lib/kubelet/pods/d5ac5adf-85a6-400c-88e3-27ab1746ff4a/volumes"
Nov 28 12:57:27 crc kubenswrapper[4779]: I1128 12:57:27.995685 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12323b44-9b4d-4d78-991e-b92d4daefcb6","Type":"ContainerStarted","Data":"cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97"}
Nov 28 12:57:27 crc kubenswrapper[4779]: I1128 12:57:27.995724 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12323b44-9b4d-4d78-991e-b92d4daefcb6","Type":"ContainerStarted","Data":"e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6"}
Nov 28 12:57:27 crc kubenswrapper[4779]: I1128 12:57:27.997847 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a1cfc693-c027-4c51-bc8e-d1d5ac495223","Type":"ContainerStarted","Data":"6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930"}
Nov 28 12:57:27 crc kubenswrapper[4779]: I1128 12:57:27.997891 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a1cfc693-c027-4c51-bc8e-d1d5ac495223","Type":"ContainerStarted","Data":"922d76bdc10aa5dbf0239f11458fef11bb616b4481cafb123691ea815eae6f03"}
Nov 28 12:57:28 crc kubenswrapper[4779]: I1128 12:57:28.025670 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.02565296 podStartE2EDuration="3.02565296s" podCreationTimestamp="2025-11-28 12:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:57:28.022519477 +0000 UTC m=+1308.588194871" watchObservedRunningTime="2025-11-28 12:57:28.02565296 +0000 UTC m=+1308.591328314"
Nov 28 12:57:28 crc kubenswrapper[4779]: I1128 12:57:28.052397 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openstack/nova-scheduler-0" podStartSLOduration=2.052371053 podStartE2EDuration="2.052371053s" podCreationTimestamp="2025-11-28 12:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:57:28.048342027 +0000 UTC m=+1308.614017381" watchObservedRunningTime="2025-11-28 12:57:28.052371053 +0000 UTC m=+1308.618046437" Nov 28 12:57:28 crc kubenswrapper[4779]: I1128 12:57:28.430478 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.011628 4779 generic.go:334] "Generic (PLEG): container finished" podID="782e7605-5aed-4324-8ccf-964c7c961b48" containerID="bcbd4d0515f2398ef5fc794df4e7fc77fc020e3fd90f2ca0824eba0521907509" exitCode=0 Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.011996 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"782e7605-5aed-4324-8ccf-964c7c961b48","Type":"ContainerDied","Data":"bcbd4d0515f2398ef5fc794df4e7fc77fc020e3fd90f2ca0824eba0521907509"} Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.015267 4779 generic.go:334] "Generic (PLEG): container finished" podID="16554019-fb17-4257-9bfd-1c1ffe3edb87" containerID="09d013213ac1b1692af87bda94005251c2127c5b1430ef379f877b931cfd15b4" exitCode=0 Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.016315 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-v8d6d" event={"ID":"16554019-fb17-4257-9bfd-1c1ffe3edb87","Type":"ContainerDied","Data":"09d013213ac1b1692af87bda94005251c2127c5b1430ef379f877b931cfd15b4"} Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.115311 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.243434 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/782e7605-5aed-4324-8ccf-964c7c961b48-logs\") pod \"782e7605-5aed-4324-8ccf-964c7c961b48\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.243473 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782e7605-5aed-4324-8ccf-964c7c961b48-config-data\") pod \"782e7605-5aed-4324-8ccf-964c7c961b48\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.243624 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782e7605-5aed-4324-8ccf-964c7c961b48-combined-ca-bundle\") pod \"782e7605-5aed-4324-8ccf-964c7c961b48\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.243767 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc9qk\" (UniqueName: \"kubernetes.io/projected/782e7605-5aed-4324-8ccf-964c7c961b48-kube-api-access-nc9qk\") pod \"782e7605-5aed-4324-8ccf-964c7c961b48\" (UID: \"782e7605-5aed-4324-8ccf-964c7c961b48\") " Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.244241 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782e7605-5aed-4324-8ccf-964c7c961b48-logs" (OuterVolumeSpecName: "logs") pod "782e7605-5aed-4324-8ccf-964c7c961b48" (UID: "782e7605-5aed-4324-8ccf-964c7c961b48"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.244657 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/782e7605-5aed-4324-8ccf-964c7c961b48-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.251053 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782e7605-5aed-4324-8ccf-964c7c961b48-kube-api-access-nc9qk" (OuterVolumeSpecName: "kube-api-access-nc9qk") pod "782e7605-5aed-4324-8ccf-964c7c961b48" (UID: "782e7605-5aed-4324-8ccf-964c7c961b48"). InnerVolumeSpecName "kube-api-access-nc9qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.269956 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/782e7605-5aed-4324-8ccf-964c7c961b48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "782e7605-5aed-4324-8ccf-964c7c961b48" (UID: "782e7605-5aed-4324-8ccf-964c7c961b48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.297862 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/782e7605-5aed-4324-8ccf-964c7c961b48-config-data" (OuterVolumeSpecName: "config-data") pod "782e7605-5aed-4324-8ccf-964c7c961b48" (UID: "782e7605-5aed-4324-8ccf-964c7c961b48"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.346605 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782e7605-5aed-4324-8ccf-964c7c961b48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.346644 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc9qk\" (UniqueName: \"kubernetes.io/projected/782e7605-5aed-4324-8ccf-964c7c961b48-kube-api-access-nc9qk\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:29 crc kubenswrapper[4779]: I1128 12:57:29.346658 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/782e7605-5aed-4324-8ccf-964c7c961b48-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.029616 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"782e7605-5aed-4324-8ccf-964c7c961b48","Type":"ContainerDied","Data":"820e14bf7be7ac126869638f91a9c1322fc5369a05b1763f7358f8630dedb81c"} Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.029668 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.030288 4779 scope.go:117] "RemoveContainer" containerID="bcbd4d0515f2398ef5fc794df4e7fc77fc020e3fd90f2ca0824eba0521907509" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.073299 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.081958 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.097596 4779 scope.go:117] "RemoveContainer" containerID="586627fb9293ded9d2863454aef0d2b0ed4031f0ad4a80b2ccd16dd0ac73e865" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.109700 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:30 crc kubenswrapper[4779]: E1128 12:57:30.110561 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782e7605-5aed-4324-8ccf-964c7c961b48" containerName="nova-api-log" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.110589 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="782e7605-5aed-4324-8ccf-964c7c961b48" containerName="nova-api-log" Nov 28 12:57:30 crc kubenswrapper[4779]: E1128 12:57:30.110637 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782e7605-5aed-4324-8ccf-964c7c961b48" containerName="nova-api-api" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.110652 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="782e7605-5aed-4324-8ccf-964c7c961b48" containerName="nova-api-api" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.111033 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="782e7605-5aed-4324-8ccf-964c7c961b48" containerName="nova-api-log" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.111061 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="782e7605-5aed-4324-8ccf-964c7c961b48" containerName="nova-api-api" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.112978 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.115152 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.119803 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.267035 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7559047-05de-4ba9-acbd-8f57a1362666-config-data\") pod \"nova-api-0\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.267125 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7559047-05de-4ba9-acbd-8f57a1362666-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.267358 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7559047-05de-4ba9-acbd-8f57a1362666-logs\") pod \"nova-api-0\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.267569 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdztw\" (UniqueName: \"kubernetes.io/projected/f7559047-05de-4ba9-acbd-8f57a1362666-kube-api-access-jdztw\") pod \"nova-api-0\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.369425 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdztw\" (UniqueName: \"kubernetes.io/projected/f7559047-05de-4ba9-acbd-8f57a1362666-kube-api-access-jdztw\") pod \"nova-api-0\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.369516 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7559047-05de-4ba9-acbd-8f57a1362666-config-data\") pod \"nova-api-0\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.369592 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7559047-05de-4ba9-acbd-8f57a1362666-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.370463 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7559047-05de-4ba9-acbd-8f57a1362666-logs\") pod \"nova-api-0\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.370784 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7559047-05de-4ba9-acbd-8f57a1362666-logs\") pod \"nova-api-0\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " 
pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.376333 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7559047-05de-4ba9-acbd-8f57a1362666-config-data\") pod \"nova-api-0\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.377128 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7559047-05de-4ba9-acbd-8f57a1362666-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.390358 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdztw\" (UniqueName: \"kubernetes.io/projected/f7559047-05de-4ba9-acbd-8f57a1362666-kube-api-access-jdztw\") pod \"nova-api-0\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.452858 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.565125 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.674667 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-scripts\") pod \"16554019-fb17-4257-9bfd-1c1ffe3edb87\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.674810 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-config-data\") pod \"16554019-fb17-4257-9bfd-1c1ffe3edb87\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.674887 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-combined-ca-bundle\") pod \"16554019-fb17-4257-9bfd-1c1ffe3edb87\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.674979 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjrct\" (UniqueName: \"kubernetes.io/projected/16554019-fb17-4257-9bfd-1c1ffe3edb87-kube-api-access-bjrct\") pod \"16554019-fb17-4257-9bfd-1c1ffe3edb87\" (UID: \"16554019-fb17-4257-9bfd-1c1ffe3edb87\") " Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.692275 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-scripts" (OuterVolumeSpecName: "scripts") pod "16554019-fb17-4257-9bfd-1c1ffe3edb87" (UID: "16554019-fb17-4257-9bfd-1c1ffe3edb87"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.702716 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16554019-fb17-4257-9bfd-1c1ffe3edb87" (UID: "16554019-fb17-4257-9bfd-1c1ffe3edb87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.703399 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16554019-fb17-4257-9bfd-1c1ffe3edb87-kube-api-access-bjrct" (OuterVolumeSpecName: "kube-api-access-bjrct") pod "16554019-fb17-4257-9bfd-1c1ffe3edb87" (UID: "16554019-fb17-4257-9bfd-1c1ffe3edb87"). InnerVolumeSpecName "kube-api-access-bjrct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.716273 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-config-data" (OuterVolumeSpecName: "config-data") pod "16554019-fb17-4257-9bfd-1c1ffe3edb87" (UID: "16554019-fb17-4257-9bfd-1c1ffe3edb87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.778072 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.778157 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjrct\" (UniqueName: \"kubernetes.io/projected/16554019-fb17-4257-9bfd-1c1ffe3edb87-kube-api-access-bjrct\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.778171 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.778247 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16554019-fb17-4257-9bfd-1c1ffe3edb87-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:30 crc kubenswrapper[4779]: I1128 12:57:30.890739 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:30 crc kubenswrapper[4779]: W1128 12:57:30.896022 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7559047_05de_4ba9_acbd_8f57a1362666.slice/crio-7966a7e0566548789af017b76a94546cb6773326a20bf5a841d40de27cec3324 WatchSource:0}: Error finding container 7966a7e0566548789af017b76a94546cb6773326a20bf5a841d40de27cec3324: Status 404 returned error can't find the container with id 7966a7e0566548789af017b76a94546cb6773326a20bf5a841d40de27cec3324 Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.038690 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7559047-05de-4ba9-acbd-8f57a1362666","Type":"ContainerStarted","Data":"7966a7e0566548789af017b76a94546cb6773326a20bf5a841d40de27cec3324"} Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.043284 4779 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell1-conductor-db-sync-v8d6d" event={"ID":"16554019-fb17-4257-9bfd-1c1ffe3edb87","Type":"ContainerDied","Data":"928da867cc665c8ccd14328f458f7044713abea6ede84ea4cde9613691762d65"} Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.043305 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="928da867cc665c8ccd14328f458f7044713abea6ede84ea4cde9613691762d65" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.043350 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-v8d6d" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.118905 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 12:57:31 crc kubenswrapper[4779]: E1128 12:57:31.119331 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16554019-fb17-4257-9bfd-1c1ffe3edb87" containerName="nova-cell1-conductor-db-sync" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.119343 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="16554019-fb17-4257-9bfd-1c1ffe3edb87" containerName="nova-cell1-conductor-db-sync" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.119557 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="16554019-fb17-4257-9bfd-1c1ffe3edb87" containerName="nova-cell1-conductor-db-sync" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.120197 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.123713 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.126372 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.187189 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd44820-2805-4e67-a5f8-05a5a31dc047-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bfd44820-2805-4e67-a5f8-05a5a31dc047\") " pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.187476 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5msgf\" (UniqueName: \"kubernetes.io/projected/bfd44820-2805-4e67-a5f8-05a5a31dc047-kube-api-access-5msgf\") pod \"nova-cell1-conductor-0\" (UID: \"bfd44820-2805-4e67-a5f8-05a5a31dc047\") " pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.187528 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd44820-2805-4e67-a5f8-05a5a31dc047-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bfd44820-2805-4e67-a5f8-05a5a31dc047\") " pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.288969 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd44820-2805-4e67-a5f8-05a5a31dc047-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bfd44820-2805-4e67-a5f8-05a5a31dc047\") " pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.289058 
4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5msgf\" (UniqueName: \"kubernetes.io/projected/bfd44820-2805-4e67-a5f8-05a5a31dc047-kube-api-access-5msgf\") pod \"nova-cell1-conductor-0\" (UID: \"bfd44820-2805-4e67-a5f8-05a5a31dc047\") " pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.289121 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd44820-2805-4e67-a5f8-05a5a31dc047-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bfd44820-2805-4e67-a5f8-05a5a31dc047\") " pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.297515 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd44820-2805-4e67-a5f8-05a5a31dc047-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bfd44820-2805-4e67-a5f8-05a5a31dc047\") " pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.297659 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd44820-2805-4e67-a5f8-05a5a31dc047-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bfd44820-2805-4e67-a5f8-05a5a31dc047\") " pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.304530 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5msgf\" (UniqueName: \"kubernetes.io/projected/bfd44820-2805-4e67-a5f8-05a5a31dc047-kube-api-access-5msgf\") pod \"nova-cell1-conductor-0\" (UID: \"bfd44820-2805-4e67-a5f8-05a5a31dc047\") " pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.364924 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.365003 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.483056 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.686106 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 12:57:31 crc kubenswrapper[4779]: I1128 12:57:31.744633 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782e7605-5aed-4324-8ccf-964c7c961b48" path="/var/lib/kubelet/pods/782e7605-5aed-4324-8ccf-964c7c961b48/volumes" Nov 28 12:57:32 crc kubenswrapper[4779]: I1128 12:57:32.012622 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 12:57:32 crc kubenswrapper[4779]: W1128 12:57:32.022270 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfd44820_2805_4e67_a5f8_05a5a31dc047.slice/crio-52a68a906d4f1cdd3c468da9922def8057e2d9e8927e8d29119c3446e33c238f WatchSource:0}: Error finding container 52a68a906d4f1cdd3c468da9922def8057e2d9e8927e8d29119c3446e33c238f: Status 404 returned error can't find the container with id 52a68a906d4f1cdd3c468da9922def8057e2d9e8927e8d29119c3446e33c238f Nov 28 12:57:32 crc kubenswrapper[4779]: I1128 12:57:32.059172 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7559047-05de-4ba9-acbd-8f57a1362666","Type":"ContainerStarted","Data":"ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1"} Nov 28 12:57:32 crc kubenswrapper[4779]: I1128 12:57:32.059213 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7559047-05de-4ba9-acbd-8f57a1362666","Type":"ContainerStarted","Data":"8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920"} Nov 28 12:57:32 crc kubenswrapper[4779]: I1128 12:57:32.064318 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bfd44820-2805-4e67-a5f8-05a5a31dc047","Type":"ContainerStarted","Data":"52a68a906d4f1cdd3c468da9922def8057e2d9e8927e8d29119c3446e33c238f"} Nov 28 12:57:32 crc kubenswrapper[4779]: I1128 12:57:32.087796 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.087778331 podStartE2EDuration="2.087778331s" podCreationTimestamp="2025-11-28 12:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:57:32.077219403 +0000 UTC m=+1312.642894767" watchObservedRunningTime="2025-11-28 12:57:32.087778331 +0000 UTC m=+1312.653453685" Nov 28 12:57:32 crc kubenswrapper[4779]: I1128 12:57:32.363527 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 12:57:32 crc kubenswrapper[4779]: I1128 12:57:32.363943 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="8482bdcc-fe9d-4ed6-8ade-a1319330b252" containerName="kube-state-metrics" containerID="cri-o://fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e" gracePeriod=30 Nov 28 12:57:32 crc kubenswrapper[4779]: I1128 12:57:32.859522 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 12:57:32 crc kubenswrapper[4779]: I1128 12:57:32.934559 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxs72\" (UniqueName: \"kubernetes.io/projected/8482bdcc-fe9d-4ed6-8ade-a1319330b252-kube-api-access-fxs72\") pod \"8482bdcc-fe9d-4ed6-8ade-a1319330b252\" (UID: \"8482bdcc-fe9d-4ed6-8ade-a1319330b252\") " Nov 28 12:57:32 crc kubenswrapper[4779]: I1128 12:57:32.955207 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8482bdcc-fe9d-4ed6-8ade-a1319330b252-kube-api-access-fxs72" (OuterVolumeSpecName: "kube-api-access-fxs72") pod "8482bdcc-fe9d-4ed6-8ade-a1319330b252" (UID: "8482bdcc-fe9d-4ed6-8ade-a1319330b252"). InnerVolumeSpecName "kube-api-access-fxs72". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.037408 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxs72\" (UniqueName: \"kubernetes.io/projected/8482bdcc-fe9d-4ed6-8ade-a1319330b252-kube-api-access-fxs72\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.076278 4779 generic.go:334] "Generic (PLEG): container finished" podID="8482bdcc-fe9d-4ed6-8ade-a1319330b252" containerID="fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e" exitCode=2 Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.076343 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8482bdcc-fe9d-4ed6-8ade-a1319330b252","Type":"ContainerDied","Data":"fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e"} Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.076373 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8482bdcc-fe9d-4ed6-8ade-a1319330b252","Type":"ContainerDied","Data":"173be10794ffd53f43c10a6e498e0317e04e24dcb2bd9dbd0cb83b1c10cf4c6a"} Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.076389 4779 scope.go:117] "RemoveContainer" containerID="fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.076501 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.082834 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bfd44820-2805-4e67-a5f8-05a5a31dc047","Type":"ContainerStarted","Data":"031168763fb8ce9eed0ddfe64d71664165218db5308f9e084cb8c8547f5b9de9"} Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.109265 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.109242415 podStartE2EDuration="2.109242415s" podCreationTimestamp="2025-11-28 12:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:57:33.09957299 +0000 UTC m=+1313.665248344" watchObservedRunningTime="2025-11-28 12:57:33.109242415 +0000 UTC m=+1313.674917769" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.120273 4779 scope.go:117] "RemoveContainer" containerID="fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e" Nov 28 12:57:33 crc kubenswrapper[4779]: E1128 12:57:33.121073 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e\": container with ID starting with fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e not found: ID does not exist" containerID="fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.121141 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e"} err="failed to get container status \"fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e\": rpc error: code = NotFound desc = could not find container \"fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e\": container with ID starting with fa711c66a66d58096bd0a0f030fab0b17df3dbdbb30777cf257fdf1b61e4c27e not found: ID does not exist" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.142032 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.152391 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.163142 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 12:57:33 crc kubenswrapper[4779]: E1128 12:57:33.163887 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8482bdcc-fe9d-4ed6-8ade-a1319330b252" containerName="kube-state-metrics" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.163906 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8482bdcc-fe9d-4ed6-8ade-a1319330b252" containerName="kube-state-metrics" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.164164 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="8482bdcc-fe9d-4ed6-8ade-a1319330b252" containerName="kube-state-metrics" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.164945 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.169476 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.169664 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.175715 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.240383 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4e75c99e-5273-44e8-a5d1-98b317b5dacf-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4e75c99e-5273-44e8-a5d1-98b317b5dacf\") " pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.240424 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e75c99e-5273-44e8-a5d1-98b317b5dacf-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4e75c99e-5273-44e8-a5d1-98b317b5dacf\") " pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.240444 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e75c99e-5273-44e8-a5d1-98b317b5dacf-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4e75c99e-5273-44e8-a5d1-98b317b5dacf\") " pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.240584 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jfsg\" (UniqueName: \"kubernetes.io/projected/4e75c99e-5273-44e8-a5d1-98b317b5dacf-kube-api-access-2jfsg\") pod \"kube-state-metrics-0\" (UID: \"4e75c99e-5273-44e8-a5d1-98b317b5dacf\") " pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.343065 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4e75c99e-5273-44e8-a5d1-98b317b5dacf-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4e75c99e-5273-44e8-a5d1-98b317b5dacf\") " pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.343202 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e75c99e-5273-44e8-a5d1-98b317b5dacf-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4e75c99e-5273-44e8-a5d1-98b317b5dacf\") " pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.343260 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e75c99e-5273-44e8-a5d1-98b317b5dacf-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4e75c99e-5273-44e8-a5d1-98b317b5dacf\") " pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.343610 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jfsg\" 
(UniqueName: \"kubernetes.io/projected/4e75c99e-5273-44e8-a5d1-98b317b5dacf-kube-api-access-2jfsg\") pod \"kube-state-metrics-0\" (UID: \"4e75c99e-5273-44e8-a5d1-98b317b5dacf\") " pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.351517 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e75c99e-5273-44e8-a5d1-98b317b5dacf-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4e75c99e-5273-44e8-a5d1-98b317b5dacf\") " pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.352799 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4e75c99e-5273-44e8-a5d1-98b317b5dacf-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4e75c99e-5273-44e8-a5d1-98b317b5dacf\") " pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.354499 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e75c99e-5273-44e8-a5d1-98b317b5dacf-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4e75c99e-5273-44e8-a5d1-98b317b5dacf\") " pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.362253 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jfsg\" (UniqueName: \"kubernetes.io/projected/4e75c99e-5273-44e8-a5d1-98b317b5dacf-kube-api-access-2jfsg\") pod \"kube-state-metrics-0\" (UID: \"4e75c99e-5273-44e8-a5d1-98b317b5dacf\") " pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.484687 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.746014 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8482bdcc-fe9d-4ed6-8ade-a1319330b252" path="/var/lib/kubelet/pods/8482bdcc-fe9d-4ed6-8ade-a1319330b252/volumes" Nov 28 12:57:33 crc kubenswrapper[4779]: I1128 12:57:33.971904 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 12:57:34 crc kubenswrapper[4779]: I1128 12:57:34.066568 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:57:34 crc kubenswrapper[4779]: I1128 12:57:34.066824 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="ceilometer-central-agent" containerID="cri-o://5ae620089df6c228f1fbcac70aca74fc8376f08c05250b32f82920fe3952bf81" gracePeriod=30 Nov 28 12:57:34 crc kubenswrapper[4779]: I1128 12:57:34.066903 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="sg-core" containerID="cri-o://e0e645c256a7953ca5f8241897036d8dfa6793ada8ba733c07c63c94a2601bb1" gracePeriod=30 Nov 28 12:57:34 crc kubenswrapper[4779]: I1128 12:57:34.066898 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="proxy-httpd" containerID="cri-o://a8527b30304e13f25f431b0ec219438e3123f37bc2e492b7397df8161f1021fd" gracePeriod=30 Nov 28 12:57:34 crc kubenswrapper[4779]: I1128 12:57:34.066957 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="ceilometer-notification-agent" containerID="cri-o://bd2cad8dcc4c16f6bc865b5df3adeb87500cfa21b6fb3c0680b1b07d3525df9a" gracePeriod=30 Nov 28 12:57:34 crc kubenswrapper[4779]: I1128 12:57:34.094487 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4e75c99e-5273-44e8-a5d1-98b317b5dacf","Type":"ContainerStarted","Data":"31eedc0825f4d2019001f363163c87cb1a74ec61a0ccabc31d563999d7a1994a"} Nov 28 12:57:34 crc kubenswrapper[4779]: I1128 12:57:34.094523 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:35 crc kubenswrapper[4779]: I1128 12:57:35.105342 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4e75c99e-5273-44e8-a5d1-98b317b5dacf","Type":"ContainerStarted","Data":"7bec321609582f3d6b748747e744f1e0548c0aadd58fa0ffafa033673112532f"} Nov 28 12:57:35 crc kubenswrapper[4779]: I1128 12:57:35.105746 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 12:57:35 crc kubenswrapper[4779]: I1128 12:57:35.114199 4779 generic.go:334] "Generic (PLEG): container finished" podID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerID="a8527b30304e13f25f431b0ec219438e3123f37bc2e492b7397df8161f1021fd" exitCode=0 Nov 28 12:57:35 crc kubenswrapper[4779]: I1128 12:57:35.114243 4779 generic.go:334] "Generic (PLEG): container finished" podID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerID="e0e645c256a7953ca5f8241897036d8dfa6793ada8ba733c07c63c94a2601bb1" exitCode=2 Nov 28 12:57:35 crc kubenswrapper[4779]: I1128 12:57:35.114258 4779 
generic.go:334] "Generic (PLEG): container finished" podID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerID="5ae620089df6c228f1fbcac70aca74fc8376f08c05250b32f82920fe3952bf81" exitCode=0 Nov 28 12:57:35 crc kubenswrapper[4779]: I1128 12:57:35.114262 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0079f475-1f10-4aeb-a5dd-e74628d26936","Type":"ContainerDied","Data":"a8527b30304e13f25f431b0ec219438e3123f37bc2e492b7397df8161f1021fd"} Nov 28 12:57:35 crc kubenswrapper[4779]: I1128 12:57:35.114326 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0079f475-1f10-4aeb-a5dd-e74628d26936","Type":"ContainerDied","Data":"e0e645c256a7953ca5f8241897036d8dfa6793ada8ba733c07c63c94a2601bb1"} Nov 28 12:57:35 crc kubenswrapper[4779]: I1128 12:57:35.114349 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0079f475-1f10-4aeb-a5dd-e74628d26936","Type":"ContainerDied","Data":"5ae620089df6c228f1fbcac70aca74fc8376f08c05250b32f82920fe3952bf81"} Nov 28 12:57:35 crc kubenswrapper[4779]: I1128 12:57:35.135769 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.665613443 podStartE2EDuration="2.135744121s" podCreationTimestamp="2025-11-28 12:57:33 +0000 UTC" firstStartedPulling="2025-11-28 12:57:33.982754264 +0000 UTC m=+1314.548429628" lastFinishedPulling="2025-11-28 12:57:34.452884952 +0000 UTC m=+1315.018560306" observedRunningTime="2025-11-28 12:57:35.120995173 +0000 UTC m=+1315.686670537" watchObservedRunningTime="2025-11-28 12:57:35.135744121 +0000 UTC m=+1315.701419515" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.128063 4779 generic.go:334] "Generic (PLEG): container finished" podID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerID="bd2cad8dcc4c16f6bc865b5df3adeb87500cfa21b6fb3c0680b1b07d3525df9a" exitCode=0 Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.128123 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0079f475-1f10-4aeb-a5dd-e74628d26936","Type":"ContainerDied","Data":"bd2cad8dcc4c16f6bc865b5df3adeb87500cfa21b6fb3c0680b1b07d3525df9a"} Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.365310 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.365640 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.387916 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.514570 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0079f475-1f10-4aeb-a5dd-e74628d26936-log-httpd\") pod \"0079f475-1f10-4aeb-a5dd-e74628d26936\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.514658 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll57k\" (UniqueName: \"kubernetes.io/projected/0079f475-1f10-4aeb-a5dd-e74628d26936-kube-api-access-ll57k\") pod \"0079f475-1f10-4aeb-a5dd-e74628d26936\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.514682 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-scripts\") pod \"0079f475-1f10-4aeb-a5dd-e74628d26936\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.514720 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-sg-core-conf-yaml\") pod \"0079f475-1f10-4aeb-a5dd-e74628d26936\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.514782 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0079f475-1f10-4aeb-a5dd-e74628d26936-run-httpd\") pod \"0079f475-1f10-4aeb-a5dd-e74628d26936\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.514798 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-config-data\") pod \"0079f475-1f10-4aeb-a5dd-e74628d26936\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.514816 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-combined-ca-bundle\") pod \"0079f475-1f10-4aeb-a5dd-e74628d26936\" (UID: \"0079f475-1f10-4aeb-a5dd-e74628d26936\") " Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.516244 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0079f475-1f10-4aeb-a5dd-e74628d26936-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0079f475-1f10-4aeb-a5dd-e74628d26936" (UID: "0079f475-1f10-4aeb-a5dd-e74628d26936"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.516423 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0079f475-1f10-4aeb-a5dd-e74628d26936-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0079f475-1f10-4aeb-a5dd-e74628d26936" (UID: "0079f475-1f10-4aeb-a5dd-e74628d26936"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.520777 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0079f475-1f10-4aeb-a5dd-e74628d26936-kube-api-access-ll57k" (OuterVolumeSpecName: "kube-api-access-ll57k") pod "0079f475-1f10-4aeb-a5dd-e74628d26936" (UID: "0079f475-1f10-4aeb-a5dd-e74628d26936"). InnerVolumeSpecName "kube-api-access-ll57k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.536285 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-scripts" (OuterVolumeSpecName: "scripts") pod "0079f475-1f10-4aeb-a5dd-e74628d26936" (UID: "0079f475-1f10-4aeb-a5dd-e74628d26936"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.562450 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0079f475-1f10-4aeb-a5dd-e74628d26936" (UID: "0079f475-1f10-4aeb-a5dd-e74628d26936"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.602526 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0079f475-1f10-4aeb-a5dd-e74628d26936" (UID: "0079f475-1f10-4aeb-a5dd-e74628d26936"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.617637 4779 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0079f475-1f10-4aeb-a5dd-e74628d26936-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.617679 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll57k\" (UniqueName: \"kubernetes.io/projected/0079f475-1f10-4aeb-a5dd-e74628d26936-kube-api-access-ll57k\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.617694 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.617704 4779 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.617715 4779 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0079f475-1f10-4aeb-a5dd-e74628d26936-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.617725 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.621295 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-config-data" (OuterVolumeSpecName: "config-data") pod "0079f475-1f10-4aeb-a5dd-e74628d26936" (UID: "0079f475-1f10-4aeb-a5dd-e74628d26936"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.685287 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.717893 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 12:57:36 crc kubenswrapper[4779]: I1128 12:57:36.719647 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0079f475-1f10-4aeb-a5dd-e74628d26936-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.143399 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.143823 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0079f475-1f10-4aeb-a5dd-e74628d26936","Type":"ContainerDied","Data":"00f29ba359c3507e7c089e87598f9050b89e81d65f0b41b3fe0d188e5d6d9566"} Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.144291 4779 scope.go:117] "RemoveContainer" containerID="a8527b30304e13f25f431b0ec219438e3123f37bc2e492b7397df8161f1021fd" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.166898 4779 scope.go:117] "RemoveContainer" containerID="e0e645c256a7953ca5f8241897036d8dfa6793ada8ba733c07c63c94a2601bb1" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.198708 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.220686 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.241247 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.245637 4779 scope.go:117] "RemoveContainer" containerID="bd2cad8dcc4c16f6bc865b5df3adeb87500cfa21b6fb3c0680b1b07d3525df9a" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.252051 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:57:37 crc kubenswrapper[4779]: E1128 12:57:37.252484 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="proxy-httpd" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.252500 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="proxy-httpd" Nov 28 12:57:37 crc kubenswrapper[4779]: E1128 12:57:37.252513 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="ceilometer-central-agent" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.252520 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="ceilometer-central-agent" Nov 28 12:57:37 crc kubenswrapper[4779]: E1128 12:57:37.252542 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="ceilometer-notification-agent" Nov 28 12:57:37 
crc kubenswrapper[4779]: I1128 12:57:37.252549 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="ceilometer-notification-agent" Nov 28 12:57:37 crc kubenswrapper[4779]: E1128 12:57:37.252557 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="sg-core" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.252563 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="sg-core" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.252745 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="ceilometer-central-agent" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.252763 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="sg-core" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.252779 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="ceilometer-notification-agent" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.252789 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" containerName="proxy-httpd" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.261860 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.268078 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.268114 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.268525 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.277444 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.290562 4779 scope.go:117] "RemoveContainer" containerID="5ae620089df6c228f1fbcac70aca74fc8376f08c05250b32f82920fe3952bf81" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.330657 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.330711 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-run-httpd\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.330771 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-config-data\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: 
I1128 12:57:37.330798 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-scripts\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.330830 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq6xc\" (UniqueName: \"kubernetes.io/projected/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-kube-api-access-tq6xc\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.330879 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.330910 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.330940 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-log-httpd\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.378267 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.378305 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.432678 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-config-data\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.432785 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-scripts\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.432867 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq6xc\" (UniqueName: \"kubernetes.io/projected/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-kube-api-access-tq6xc\") pod 
\"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.432929 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.432978 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.433026 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-log-httpd\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.433265 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.433334 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-run-httpd\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.435519 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-run-httpd\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.435725 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-log-httpd\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.451218 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.451245 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-scripts\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.451375 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.452070 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.452246 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-config-data\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.455985 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq6xc\" (UniqueName: \"kubernetes.io/projected/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-kube-api-access-tq6xc\") pod \"ceilometer-0\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.580429 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:57:37 crc kubenswrapper[4779]: I1128 12:57:37.743611 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0079f475-1f10-4aeb-a5dd-e74628d26936" path="/var/lib/kubelet/pods/0079f475-1f10-4aeb-a5dd-e74628d26936/volumes" Nov 28 12:57:38 crc kubenswrapper[4779]: W1128 12:57:38.061355 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc454a59_2c44_49d3_8ff7_c44eedd22b3b.slice/crio-6804e1ecda9895750d3ab8d80a7e1d60d039229f82b6a1dae1d0a681183cf535 WatchSource:0}: Error finding container 6804e1ecda9895750d3ab8d80a7e1d60d039229f82b6a1dae1d0a681183cf535: Status 404 returned error can't find the container with id 6804e1ecda9895750d3ab8d80a7e1d60d039229f82b6a1dae1d0a681183cf535 Nov 28 12:57:38 crc kubenswrapper[4779]: I1128 12:57:38.065669 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:57:38 crc kubenswrapper[4779]: I1128 12:57:38.152495 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc454a59-2c44-49d3-8ff7-c44eedd22b3b","Type":"ContainerStarted","Data":"6804e1ecda9895750d3ab8d80a7e1d60d039229f82b6a1dae1d0a681183cf535"} Nov 28 12:57:39 crc kubenswrapper[4779]: I1128 12:57:39.164520 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc454a59-2c44-49d3-8ff7-c44eedd22b3b","Type":"ContainerStarted","Data":"a220ffee9c0a987aa62017cccc187e455b517faf9896a608b587b35ad378eff3"} Nov 28 12:57:40 crc kubenswrapper[4779]: I1128 12:57:40.177228 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc454a59-2c44-49d3-8ff7-c44eedd22b3b","Type":"ContainerStarted","Data":"d242ba2b18e722833c2f2cc07254e31af05831b45f3e3f98f38069a6f8ede94e"} Nov 28 12:57:40 crc kubenswrapper[4779]: I1128 12:57:40.453169 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 12:57:40 crc kubenswrapper[4779]: I1128 12:57:40.453432 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 12:57:41 crc kubenswrapper[4779]: I1128 12:57:41.190516 4779 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc454a59-2c44-49d3-8ff7-c44eedd22b3b","Type":"ContainerStarted","Data":"3c1109b56cf220329f4616a16b82243bbc6638ec1ca7b79c9374aff16de57c28"} Nov 28 12:57:41 crc kubenswrapper[4779]: I1128 12:57:41.524191 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 28 12:57:41 crc kubenswrapper[4779]: I1128 12:57:41.535369 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f7559047-05de-4ba9-acbd-8f57a1362666" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 12:57:41 crc kubenswrapper[4779]: I1128 12:57:41.535389 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f7559047-05de-4ba9-acbd-8f57a1362666" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 12:57:42 crc kubenswrapper[4779]: I1128 12:57:42.200331 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc454a59-2c44-49d3-8ff7-c44eedd22b3b","Type":"ContainerStarted","Data":"fc074b042cc8cba69ab8c504d98fdb1e0855a5779ac81092f4f1d51ab5a9daa4"} Nov 28 12:57:42 crc kubenswrapper[4779]: I1128 12:57:42.200748 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 12:57:42 crc kubenswrapper[4779]: I1128 12:57:42.251666 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.582467109 podStartE2EDuration="5.251647365s" podCreationTimestamp="2025-11-28 12:57:37 +0000 UTC" firstStartedPulling="2025-11-28 12:57:38.063543217 +0000 UTC m=+1318.629218571" lastFinishedPulling="2025-11-28 12:57:41.732723453 +0000 UTC m=+1322.298398827" observedRunningTime="2025-11-28 12:57:42.238929211 +0000 UTC m=+1322.804604565" watchObservedRunningTime="2025-11-28 12:57:42.251647365 +0000 UTC m=+1322.817322719" Nov 28 12:57:43 crc kubenswrapper[4779]: I1128 12:57:43.578227 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 12:57:46 crc kubenswrapper[4779]: I1128 12:57:46.284464 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:57:46 crc kubenswrapper[4779]: I1128 12:57:46.285373 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:57:46 crc kubenswrapper[4779]: I1128 12:57:46.371267 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 12:57:46 crc kubenswrapper[4779]: I1128 12:57:46.375396 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 12:57:46 crc kubenswrapper[4779]: I1128 12:57:46.376621 4779 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 12:57:47 crc kubenswrapper[4779]: I1128 12:57:47.266007 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 12:57:49 crc kubenswrapper[4779]: I1128 12:57:49.279876 4779 generic.go:334] "Generic (PLEG): container finished" podID="b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c" containerID="48a18811228fab9f492469a731b2670a73ce5fa7d755df34f000c2ac5c82d9d4" exitCode=137 Nov 28 12:57:49 crc kubenswrapper[4779]: I1128 12:57:49.279981 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c","Type":"ContainerDied","Data":"48a18811228fab9f492469a731b2670a73ce5fa7d755df34f000c2ac5c82d9d4"} Nov 28 12:57:49 crc kubenswrapper[4779]: I1128 12:57:49.408975 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:49 crc kubenswrapper[4779]: I1128 12:57:49.489779 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-config-data\") pod \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\" (UID: \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\") " Nov 28 12:57:49 crc kubenswrapper[4779]: I1128 12:57:49.489886 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-combined-ca-bundle\") pod \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\" (UID: \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\") " Nov 28 12:57:49 crc kubenswrapper[4779]: I1128 12:57:49.490062 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dfwj\" (UniqueName: \"kubernetes.io/projected/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-kube-api-access-4dfwj\") pod \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\" (UID: \"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c\") " Nov 28 12:57:49 crc kubenswrapper[4779]: I1128 12:57:49.494993 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-kube-api-access-4dfwj" (OuterVolumeSpecName: "kube-api-access-4dfwj") pod "b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c" (UID: "b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c"). InnerVolumeSpecName "kube-api-access-4dfwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:57:49 crc kubenswrapper[4779]: I1128 12:57:49.522043 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c" (UID: "b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:49 crc kubenswrapper[4779]: I1128 12:57:49.523158 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-config-data" (OuterVolumeSpecName: "config-data") pod "b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c" (UID: "b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:49 crc kubenswrapper[4779]: I1128 12:57:49.593245 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dfwj\" (UniqueName: \"kubernetes.io/projected/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-kube-api-access-4dfwj\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:49 crc kubenswrapper[4779]: I1128 12:57:49.593295 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:49 crc kubenswrapper[4779]: I1128 12:57:49.593316 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.292672 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c","Type":"ContainerDied","Data":"7722c5812d1fbb03222e0d8dd967fdbd3f235101c8f3a103af838506e49e92c1"} Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.292757 4779 scope.go:117] "RemoveContainer" containerID="48a18811228fab9f492469a731b2670a73ce5fa7d755df34f000c2ac5c82d9d4" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.292764 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.329240 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.344232 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.356211 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 12:57:50 crc kubenswrapper[4779]: E1128 12:57:50.356894 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.356931 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.357281 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.358335 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.362063 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.362824 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.364064 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.366585 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.408710 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2f7b630-265b-4501-87b8-44f47fe9a11f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.408769 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2f7b630-265b-4501-87b8-44f47fe9a11f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.408803 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kj86\" (UniqueName: \"kubernetes.io/projected/c2f7b630-265b-4501-87b8-44f47fe9a11f-kube-api-access-2kj86\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.408929 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2f7b630-265b-4501-87b8-44f47fe9a11f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.408966 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2f7b630-265b-4501-87b8-44f47fe9a11f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.459549 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.460781 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.461027 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.462660 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.510259 4779 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2f7b630-265b-4501-87b8-44f47fe9a11f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.510319 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2f7b630-265b-4501-87b8-44f47fe9a11f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.510454 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2f7b630-265b-4501-87b8-44f47fe9a11f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.510489 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2f7b630-265b-4501-87b8-44f47fe9a11f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.510513 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kj86\" (UniqueName: \"kubernetes.io/projected/c2f7b630-265b-4501-87b8-44f47fe9a11f-kube-api-access-2kj86\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.516796 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2f7b630-265b-4501-87b8-44f47fe9a11f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.516890 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2f7b630-265b-4501-87b8-44f47fe9a11f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.517690 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2f7b630-265b-4501-87b8-44f47fe9a11f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.518142 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2f7b630-265b-4501-87b8-44f47fe9a11f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.527452 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kj86\" (UniqueName: 
\"kubernetes.io/projected/c2f7b630-265b-4501-87b8-44f47fe9a11f-kube-api-access-2kj86\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2f7b630-265b-4501-87b8-44f47fe9a11f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:50 crc kubenswrapper[4779]: I1128 12:57:50.682964 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.201522 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 12:57:51 crc kubenswrapper[4779]: W1128 12:57:51.211988 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2f7b630_265b_4501_87b8_44f47fe9a11f.slice/crio-eec6379279fbe99a4899c9b0f37fd8aefcadf04918eb70cd87c12d50d66b41e7 WatchSource:0}: Error finding container eec6379279fbe99a4899c9b0f37fd8aefcadf04918eb70cd87c12d50d66b41e7: Status 404 returned error can't find the container with id eec6379279fbe99a4899c9b0f37fd8aefcadf04918eb70cd87c12d50d66b41e7 Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.308442 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c2f7b630-265b-4501-87b8-44f47fe9a11f","Type":"ContainerStarted","Data":"eec6379279fbe99a4899c9b0f37fd8aefcadf04918eb70cd87c12d50d66b41e7"} Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.308896 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.312860 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.531155 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-4kf9h"] Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.532990 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.538957 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-4kf9h"] Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.642869 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-config\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.642915 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.643074 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.643131 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqhvf\" (UniqueName: \"kubernetes.io/projected/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-kube-api-access-qqhvf\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.643283 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.643412 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.742441 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c" path="/var/lib/kubelet/pods/b0c2eda9-dcae-45cc-bee9-d9ce55f34d1c/volumes" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.744986 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.745050 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.745114 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-config\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.745131 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.745178 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqhvf\" (UniqueName: \"kubernetes.io/projected/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-kube-api-access-qqhvf\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.745196 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.745908 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-ovsdbserver-nb\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.746414 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-dns-svc\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.746905 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-dns-swift-storage-0\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.757528 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-ovsdbserver-sb\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.764485 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-config\") pod 
\"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.804977 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqhvf\" (UniqueName: \"kubernetes.io/projected/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-kube-api-access-qqhvf\") pod \"dnsmasq-dns-79b5d74c8c-4kf9h\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:51 crc kubenswrapper[4779]: I1128 12:57:51.887855 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:52 crc kubenswrapper[4779]: I1128 12:57:52.329078 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c2f7b630-265b-4501-87b8-44f47fe9a11f","Type":"ContainerStarted","Data":"27a2f99503f6100b6e4f53898dacf84ca1e8251a0b1f2c32f002cfeef6faf2f1"} Nov 28 12:57:52 crc kubenswrapper[4779]: I1128 12:57:52.347409 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.347391236 podStartE2EDuration="2.347391236s" podCreationTimestamp="2025-11-28 12:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:57:52.346205495 +0000 UTC m=+1332.911880859" watchObservedRunningTime="2025-11-28 12:57:52.347391236 +0000 UTC m=+1332.913066590" Nov 28 12:57:52 crc kubenswrapper[4779]: I1128 12:57:52.385172 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-4kf9h"] Nov 28 12:57:52 crc kubenswrapper[4779]: W1128 12:57:52.388802 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4259e6bd_29c2_44a9_a7d1_eccc321fd8a5.slice/crio-4d64f54318daeb319dfcde38aa6acd4e8c8ee288d26e59826c851202c83a6671 WatchSource:0}: Error finding container 4d64f54318daeb319dfcde38aa6acd4e8c8ee288d26e59826c851202c83a6671: Status 404 returned error can't find the container with id 4d64f54318daeb319dfcde38aa6acd4e8c8ee288d26e59826c851202c83a6671 Nov 28 12:57:53 crc kubenswrapper[4779]: I1128 12:57:53.337864 4779 generic.go:334] "Generic (PLEG): container finished" podID="4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" containerID="a3bcb45235ed02510acf16dd7e98dfda8ec085d10d3fb48c20d9a6ef0e6a6273" exitCode=0 Nov 28 12:57:53 crc kubenswrapper[4779]: I1128 12:57:53.338059 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" event={"ID":"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5","Type":"ContainerDied","Data":"a3bcb45235ed02510acf16dd7e98dfda8ec085d10d3fb48c20d9a6ef0e6a6273"} Nov 28 12:57:53 crc kubenswrapper[4779]: I1128 12:57:53.339010 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" event={"ID":"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5","Type":"ContainerStarted","Data":"4d64f54318daeb319dfcde38aa6acd4e8c8ee288d26e59826c851202c83a6671"} Nov 28 12:57:53 crc kubenswrapper[4779]: I1128 12:57:53.543586 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:57:53 crc kubenswrapper[4779]: I1128 12:57:53.543958 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" 
containerName="proxy-httpd" containerID="cri-o://fc074b042cc8cba69ab8c504d98fdb1e0855a5779ac81092f4f1d51ab5a9daa4" gracePeriod=30 Nov 28 12:57:53 crc kubenswrapper[4779]: I1128 12:57:53.543976 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="sg-core" containerID="cri-o://3c1109b56cf220329f4616a16b82243bbc6638ec1ca7b79c9374aff16de57c28" gracePeriod=30 Nov 28 12:57:53 crc kubenswrapper[4779]: I1128 12:57:53.544062 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="ceilometer-notification-agent" containerID="cri-o://d242ba2b18e722833c2f2cc07254e31af05831b45f3e3f98f38069a6f8ede94e" gracePeriod=30 Nov 28 12:57:53 crc kubenswrapper[4779]: I1128 12:57:53.544251 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="ceilometer-central-agent" containerID="cri-o://a220ffee9c0a987aa62017cccc187e455b517faf9896a608b587b35ad378eff3" gracePeriod=30 Nov 28 12:57:53 crc kubenswrapper[4779]: I1128 12:57:53.550723 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.206:3000/\": EOF" Nov 28 12:57:54 crc kubenswrapper[4779]: I1128 12:57:54.034414 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:54 crc kubenswrapper[4779]: I1128 12:57:54.351992 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" event={"ID":"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5","Type":"ContainerStarted","Data":"543a19a78f763af3433632c70f3f4d2126135d5c0f2582a67c019269a3b73a9f"} Nov 28 12:57:54 crc kubenswrapper[4779]: I1128 12:57:54.352089 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:57:54 crc kubenswrapper[4779]: I1128 12:57:54.354554 4779 generic.go:334] "Generic (PLEG): container finished" podID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerID="fc074b042cc8cba69ab8c504d98fdb1e0855a5779ac81092f4f1d51ab5a9daa4" exitCode=0 Nov 28 12:57:54 crc kubenswrapper[4779]: I1128 12:57:54.354582 4779 generic.go:334] "Generic (PLEG): container finished" podID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerID="3c1109b56cf220329f4616a16b82243bbc6638ec1ca7b79c9374aff16de57c28" exitCode=2 Nov 28 12:57:54 crc kubenswrapper[4779]: I1128 12:57:54.354594 4779 generic.go:334] "Generic (PLEG): container finished" podID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerID="a220ffee9c0a987aa62017cccc187e455b517faf9896a608b587b35ad378eff3" exitCode=0 Nov 28 12:57:54 crc kubenswrapper[4779]: I1128 12:57:54.354835 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f7559047-05de-4ba9-acbd-8f57a1362666" containerName="nova-api-log" containerID="cri-o://8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920" gracePeriod=30 Nov 28 12:57:54 crc kubenswrapper[4779]: I1128 12:57:54.355082 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc454a59-2c44-49d3-8ff7-c44eedd22b3b","Type":"ContainerDied","Data":"fc074b042cc8cba69ab8c504d98fdb1e0855a5779ac81092f4f1d51ab5a9daa4"} Nov 28 12:57:54 crc 
kubenswrapper[4779]: I1128 12:57:54.355143 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc454a59-2c44-49d3-8ff7-c44eedd22b3b","Type":"ContainerDied","Data":"3c1109b56cf220329f4616a16b82243bbc6638ec1ca7b79c9374aff16de57c28"} Nov 28 12:57:54 crc kubenswrapper[4779]: I1128 12:57:54.355157 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc454a59-2c44-49d3-8ff7-c44eedd22b3b","Type":"ContainerDied","Data":"a220ffee9c0a987aa62017cccc187e455b517faf9896a608b587b35ad378eff3"} Nov 28 12:57:54 crc kubenswrapper[4779]: I1128 12:57:54.355239 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f7559047-05de-4ba9-acbd-8f57a1362666" containerName="nova-api-api" containerID="cri-o://ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1" gracePeriod=30 Nov 28 12:57:54 crc kubenswrapper[4779]: I1128 12:57:54.389553 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" podStartSLOduration=3.389525363 podStartE2EDuration="3.389525363s" podCreationTimestamp="2025-11-28 12:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:57:54.376306155 +0000 UTC m=+1334.941981509" watchObservedRunningTime="2025-11-28 12:57:54.389525363 +0000 UTC m=+1334.955200727" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.364806 4779 generic.go:334] "Generic (PLEG): container finished" podID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerID="d242ba2b18e722833c2f2cc07254e31af05831b45f3e3f98f38069a6f8ede94e" exitCode=0 Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.364906 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc454a59-2c44-49d3-8ff7-c44eedd22b3b","Type":"ContainerDied","Data":"d242ba2b18e722833c2f2cc07254e31af05831b45f3e3f98f38069a6f8ede94e"} Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.369416 4779 generic.go:334] "Generic (PLEG): container finished" podID="f7559047-05de-4ba9-acbd-8f57a1362666" containerID="8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920" exitCode=143 Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.369534 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7559047-05de-4ba9-acbd-8f57a1362666","Type":"ContainerDied","Data":"8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920"} Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.683533 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.695422 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.749405 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-combined-ca-bundle\") pod \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.749476 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq6xc\" (UniqueName: \"kubernetes.io/projected/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-kube-api-access-tq6xc\") pod \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.749512 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-sg-core-conf-yaml\") pod \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.749556 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-ceilometer-tls-certs\") pod \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.749593 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-scripts\") pod \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.749816 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-log-httpd\") pod \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.749851 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-run-httpd\") pod \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.749934 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-config-data\") pod \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\" (UID: \"cc454a59-2c44-49d3-8ff7-c44eedd22b3b\") " Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.754343 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cc454a59-2c44-49d3-8ff7-c44eedd22b3b" (UID: "cc454a59-2c44-49d3-8ff7-c44eedd22b3b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.754954 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cc454a59-2c44-49d3-8ff7-c44eedd22b3b" (UID: "cc454a59-2c44-49d3-8ff7-c44eedd22b3b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.758834 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-scripts" (OuterVolumeSpecName: "scripts") pod "cc454a59-2c44-49d3-8ff7-c44eedd22b3b" (UID: "cc454a59-2c44-49d3-8ff7-c44eedd22b3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.759559 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-kube-api-access-tq6xc" (OuterVolumeSpecName: "kube-api-access-tq6xc") pod "cc454a59-2c44-49d3-8ff7-c44eedd22b3b" (UID: "cc454a59-2c44-49d3-8ff7-c44eedd22b3b"). InnerVolumeSpecName "kube-api-access-tq6xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.814561 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "cc454a59-2c44-49d3-8ff7-c44eedd22b3b" (UID: "cc454a59-2c44-49d3-8ff7-c44eedd22b3b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.834703 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cc454a59-2c44-49d3-8ff7-c44eedd22b3b" (UID: "cc454a59-2c44-49d3-8ff7-c44eedd22b3b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.852124 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq6xc\" (UniqueName: \"kubernetes.io/projected/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-kube-api-access-tq6xc\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.852156 4779 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.852165 4779 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.852173 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.852182 4779 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.852190 4779 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.877381 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc454a59-2c44-49d3-8ff7-c44eedd22b3b" (UID: "cc454a59-2c44-49d3-8ff7-c44eedd22b3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.890516 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-config-data" (OuterVolumeSpecName: "config-data") pod "cc454a59-2c44-49d3-8ff7-c44eedd22b3b" (UID: "cc454a59-2c44-49d3-8ff7-c44eedd22b3b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.953600 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:55 crc kubenswrapper[4779]: I1128 12:57:55.953627 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc454a59-2c44-49d3-8ff7-c44eedd22b3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.385171 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc454a59-2c44-49d3-8ff7-c44eedd22b3b","Type":"ContainerDied","Data":"6804e1ecda9895750d3ab8d80a7e1d60d039229f82b6a1dae1d0a681183cf535"} Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.385442 4779 scope.go:117] "RemoveContainer" containerID="fc074b042cc8cba69ab8c504d98fdb1e0855a5779ac81092f4f1d51ab5a9daa4" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.385497 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.426409 4779 scope.go:117] "RemoveContainer" containerID="3c1109b56cf220329f4616a16b82243bbc6638ec1ca7b79c9374aff16de57c28" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.431020 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.439523 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.454196 4779 scope.go:117] "RemoveContainer" containerID="d242ba2b18e722833c2f2cc07254e31af05831b45f3e3f98f38069a6f8ede94e" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.469634 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:57:56 crc kubenswrapper[4779]: E1128 12:57:56.469989 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="ceilometer-notification-agent" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.470007 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="ceilometer-notification-agent" Nov 28 12:57:56 crc kubenswrapper[4779]: E1128 12:57:56.470024 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="proxy-httpd" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.470030 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="proxy-httpd" Nov 28 12:57:56 crc kubenswrapper[4779]: E1128 12:57:56.470048 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="ceilometer-central-agent" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.470055 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="ceilometer-central-agent" Nov 28 12:57:56 crc kubenswrapper[4779]: E1128 12:57:56.470071 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="sg-core" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.470077 4779 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="sg-core" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.478298 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="ceilometer-central-agent" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.478438 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="sg-core" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.478466 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="proxy-httpd" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.478583 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" containerName="ceilometer-notification-agent" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.481750 4779 scope.go:117] "RemoveContainer" containerID="a220ffee9c0a987aa62017cccc187e455b517faf9896a608b587b35ad378eff3" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.483370 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.482153 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.485758 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.487003 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.487951 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.564145 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.564216 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-scripts\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.564411 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.564479 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-config-data\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.564592 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-8hn6p\" (UniqueName: \"kubernetes.io/projected/6a49e784-eabb-4391-a69f-695474f302b7-kube-api-access-8hn6p\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.564662 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.564695 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49e784-eabb-4391-a69f-695474f302b7-run-httpd\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.564728 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49e784-eabb-4391-a69f-695474f302b7-log-httpd\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.666733 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.666810 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-config-data\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.666915 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hn6p\" (UniqueName: \"kubernetes.io/projected/6a49e784-eabb-4391-a69f-695474f302b7-kube-api-access-8hn6p\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.666962 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.666992 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49e784-eabb-4391-a69f-695474f302b7-run-httpd\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.667025 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49e784-eabb-4391-a69f-695474f302b7-log-httpd\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.667165 
4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.667201 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-scripts\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.667553 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49e784-eabb-4391-a69f-695474f302b7-log-httpd\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.667602 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49e784-eabb-4391-a69f-695474f302b7-run-httpd\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.673870 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.677919 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.679238 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-scripts\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.681613 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.686148 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-config-data\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.700030 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hn6p\" (UniqueName: \"kubernetes.io/projected/6a49e784-eabb-4391-a69f-695474f302b7-kube-api-access-8hn6p\") pod \"ceilometer-0\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " pod="openstack/ceilometer-0" Nov 28 12:57:56 crc kubenswrapper[4779]: I1128 12:57:56.805900 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 12:57:57 crc kubenswrapper[4779]: I1128 12:57:57.311970 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 12:57:57 crc kubenswrapper[4779]: I1128 12:57:57.321998 4779 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 12:57:57 crc kubenswrapper[4779]: I1128 12:57:57.396529 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a49e784-eabb-4391-a69f-695474f302b7","Type":"ContainerStarted","Data":"f4a0125785b585015858eca2b4e8dda26572e2f7dde89693fa05a133fd0748b0"} Nov 28 12:57:57 crc kubenswrapper[4779]: I1128 12:57:57.744323 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc454a59-2c44-49d3-8ff7-c44eedd22b3b" path="/var/lib/kubelet/pods/cc454a59-2c44-49d3-8ff7-c44eedd22b3b/volumes" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.243232 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.310244 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7559047-05de-4ba9-acbd-8f57a1362666-combined-ca-bundle\") pod \"f7559047-05de-4ba9-acbd-8f57a1362666\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.310568 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7559047-05de-4ba9-acbd-8f57a1362666-config-data\") pod \"f7559047-05de-4ba9-acbd-8f57a1362666\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.310739 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7559047-05de-4ba9-acbd-8f57a1362666-logs\") pod \"f7559047-05de-4ba9-acbd-8f57a1362666\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.310872 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdztw\" (UniqueName: \"kubernetes.io/projected/f7559047-05de-4ba9-acbd-8f57a1362666-kube-api-access-jdztw\") pod \"f7559047-05de-4ba9-acbd-8f57a1362666\" (UID: \"f7559047-05de-4ba9-acbd-8f57a1362666\") " Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.314064 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7559047-05de-4ba9-acbd-8f57a1362666-logs" (OuterVolumeSpecName: "logs") pod "f7559047-05de-4ba9-acbd-8f57a1362666" (UID: "f7559047-05de-4ba9-acbd-8f57a1362666"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.315604 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7559047-05de-4ba9-acbd-8f57a1362666-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.330741 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7559047-05de-4ba9-acbd-8f57a1362666-kube-api-access-jdztw" (OuterVolumeSpecName: "kube-api-access-jdztw") pod "f7559047-05de-4ba9-acbd-8f57a1362666" (UID: "f7559047-05de-4ba9-acbd-8f57a1362666"). 
InnerVolumeSpecName "kube-api-access-jdztw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.372014 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7559047-05de-4ba9-acbd-8f57a1362666-config-data" (OuterVolumeSpecName: "config-data") pod "f7559047-05de-4ba9-acbd-8f57a1362666" (UID: "f7559047-05de-4ba9-acbd-8f57a1362666"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.384164 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7559047-05de-4ba9-acbd-8f57a1362666-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7559047-05de-4ba9-acbd-8f57a1362666" (UID: "f7559047-05de-4ba9-acbd-8f57a1362666"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.411996 4779 generic.go:334] "Generic (PLEG): container finished" podID="f7559047-05de-4ba9-acbd-8f57a1362666" containerID="ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1" exitCode=0 Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.412059 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7559047-05de-4ba9-acbd-8f57a1362666","Type":"ContainerDied","Data":"ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1"} Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.412087 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f7559047-05de-4ba9-acbd-8f57a1362666","Type":"ContainerDied","Data":"7966a7e0566548789af017b76a94546cb6773326a20bf5a841d40de27cec3324"} Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.412147 4779 scope.go:117] "RemoveContainer" containerID="ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.412276 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.415752 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a49e784-eabb-4391-a69f-695474f302b7","Type":"ContainerStarted","Data":"6880011e54c00c5c9ef65fb20f9aa16c481813ccf5c5cbd95ad5d6bde775ca5a"} Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.418199 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7559047-05de-4ba9-acbd-8f57a1362666-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.418262 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdztw\" (UniqueName: \"kubernetes.io/projected/f7559047-05de-4ba9-acbd-8f57a1362666-kube-api-access-jdztw\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.418277 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7559047-05de-4ba9-acbd-8f57a1362666-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.466778 4779 scope.go:117] "RemoveContainer" containerID="8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.471018 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.478624 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.498508 4779 scope.go:117] "RemoveContainer" containerID="ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1" Nov 28 12:57:58 crc kubenswrapper[4779]: E1128 12:57:58.499036 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1\": container with ID starting with ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1 not found: ID does not exist" containerID="ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.499169 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1"} err="failed to get container status \"ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1\": rpc error: code = NotFound desc = could not find container \"ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1\": container with ID starting with ec3727d08064c13295ea87d27c7b9431c6174a35863fb87ca267396e890e54c1 not found: ID does not exist" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.499287 4779 scope.go:117] "RemoveContainer" containerID="8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920" Nov 28 12:57:58 crc kubenswrapper[4779]: E1128 12:57:58.499880 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920\": container with ID starting with 8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920 not found: ID does not exist" containerID="8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920" Nov 28 12:57:58 crc 
kubenswrapper[4779]: I1128 12:57:58.499931 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920"} err="failed to get container status \"8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920\": rpc error: code = NotFound desc = could not find container \"8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920\": container with ID starting with 8d0cc9a55bb75d96b26d06d327306725907ac538564b6f0852ccd8eed0d14920 not found: ID does not exist" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.502459 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:58 crc kubenswrapper[4779]: E1128 12:57:58.503323 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7559047-05de-4ba9-acbd-8f57a1362666" containerName="nova-api-api" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.503378 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7559047-05de-4ba9-acbd-8f57a1362666" containerName="nova-api-api" Nov 28 12:57:58 crc kubenswrapper[4779]: E1128 12:57:58.503410 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7559047-05de-4ba9-acbd-8f57a1362666" containerName="nova-api-log" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.503415 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7559047-05de-4ba9-acbd-8f57a1362666" containerName="nova-api-log" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.503637 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7559047-05de-4ba9-acbd-8f57a1362666" containerName="nova-api-log" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.503662 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7559047-05de-4ba9-acbd-8f57a1362666" containerName="nova-api-api" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.505795 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.508412 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.509029 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.509392 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.530264 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.623661 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v2cr\" (UniqueName: \"kubernetes.io/projected/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-kube-api-access-5v2cr\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.624338 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-public-tls-certs\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.624566 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.624747 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-logs\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.624929 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-config-data\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.625268 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.727990 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.728199 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-logs\") pod \"nova-api-0\" (UID: 
\"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.728270 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-config-data\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.728340 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.728406 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v2cr\" (UniqueName: \"kubernetes.io/projected/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-kube-api-access-5v2cr\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.728438 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-public-tls-certs\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.730271 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-logs\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.736749 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-public-tls-certs\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.736803 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.736875 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-config-data\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.738438 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.750702 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v2cr\" (UniqueName: \"kubernetes.io/projected/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-kube-api-access-5v2cr\") pod \"nova-api-0\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " 
pod="openstack/nova-api-0" Nov 28 12:57:58 crc kubenswrapper[4779]: I1128 12:57:58.839748 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:57:59 crc kubenswrapper[4779]: I1128 12:57:59.425607 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a49e784-eabb-4391-a69f-695474f302b7","Type":"ContainerStarted","Data":"89181b30d62c3f4430f76d72a302618d3ba7217c8aea43dbcee2f552045c6f8c"} Nov 28 12:57:59 crc kubenswrapper[4779]: I1128 12:57:59.489879 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:57:59 crc kubenswrapper[4779]: I1128 12:57:59.742003 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7559047-05de-4ba9-acbd-8f57a1362666" path="/var/lib/kubelet/pods/f7559047-05de-4ba9-acbd-8f57a1362666/volumes" Nov 28 12:58:00 crc kubenswrapper[4779]: I1128 12:58:00.442368 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a49e784-eabb-4391-a69f-695474f302b7","Type":"ContainerStarted","Data":"2f0a352d15db1158d23e457bbf089866271c33e67c97172d12947b073d6afeb9"} Nov 28 12:58:00 crc kubenswrapper[4779]: I1128 12:58:00.445602 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b7eb5ac-85fd-42e9-ae97-b0a51381528d","Type":"ContainerStarted","Data":"82e570a0aeb804d704249ed8f27d3f691a4dc3ba3b4638a20283e9b18b229d25"} Nov 28 12:58:00 crc kubenswrapper[4779]: I1128 12:58:00.445663 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b7eb5ac-85fd-42e9-ae97-b0a51381528d","Type":"ContainerStarted","Data":"ef1f9e5356a7d1f0d78c19c9151a8e7221c907b46c9b7983d024acb9d9de9926"} Nov 28 12:58:00 crc kubenswrapper[4779]: I1128 12:58:00.445683 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b7eb5ac-85fd-42e9-ae97-b0a51381528d","Type":"ContainerStarted","Data":"942edb12742a5df209e17a8443c17b189124a0b1fef01086fea6ce0061de0331"} Nov 28 12:58:00 crc kubenswrapper[4779]: I1128 12:58:00.483772 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.483751589 podStartE2EDuration="2.483751589s" podCreationTimestamp="2025-11-28 12:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:58:00.468833036 +0000 UTC m=+1341.034508490" watchObservedRunningTime="2025-11-28 12:58:00.483751589 +0000 UTC m=+1341.049426953" Nov 28 12:58:00 crc kubenswrapper[4779]: I1128 12:58:00.684231 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:58:00 crc kubenswrapper[4779]: I1128 12:58:00.708592 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.500430 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.671471 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-chfvd"] Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.673687 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.680732 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.681025 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.699414 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-chfvd"] Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.788047 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-scripts\") pod \"nova-cell1-cell-mapping-chfvd\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.788118 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-config-data\") pod \"nova-cell1-cell-mapping-chfvd\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.788196 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvkzt\" (UniqueName: \"kubernetes.io/projected/393ab5ea-7256-4ffd-85c6-31c5548c4795-kube-api-access-vvkzt\") pod \"nova-cell1-cell-mapping-chfvd\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.788486 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-chfvd\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.890156 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.890719 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-scripts\") pod \"nova-cell1-cell-mapping-chfvd\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.890862 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-config-data\") pod \"nova-cell1-cell-mapping-chfvd\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.891257 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvkzt\" (UniqueName: \"kubernetes.io/projected/393ab5ea-7256-4ffd-85c6-31c5548c4795-kube-api-access-vvkzt\") pod \"nova-cell1-cell-mapping-chfvd\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " 
pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.891373 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-chfvd\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.902879 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-config-data\") pod \"nova-cell1-cell-mapping-chfvd\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.903186 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-chfvd\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.902976 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-scripts\") pod \"nova-cell1-cell-mapping-chfvd\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:01 crc kubenswrapper[4779]: I1128 12:58:01.924910 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvkzt\" (UniqueName: \"kubernetes.io/projected/393ab5ea-7256-4ffd-85c6-31c5548c4795-kube-api-access-vvkzt\") pod \"nova-cell1-cell-mapping-chfvd\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.006490 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-mgxbw"] Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.006777 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" podUID="f9820813-0205-41b5-a0cd-be93c4b28372" containerName="dnsmasq-dns" containerID="cri-o://98715bf2e6061faa11ee46c9723df1d5325386fd59bd784f15cec58a04dd23bd" gracePeriod=10 Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.007629 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.482084 4779 generic.go:334] "Generic (PLEG): container finished" podID="f9820813-0205-41b5-a0cd-be93c4b28372" containerID="98715bf2e6061faa11ee46c9723df1d5325386fd59bd784f15cec58a04dd23bd" exitCode=0 Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.484447 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" event={"ID":"f9820813-0205-41b5-a0cd-be93c4b28372","Type":"ContainerDied","Data":"98715bf2e6061faa11ee46c9723df1d5325386fd59bd784f15cec58a04dd23bd"} Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.549347 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-chfvd"] Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.584749 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.715072 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-dns-svc\") pod \"f9820813-0205-41b5-a0cd-be93c4b28372\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.715170 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-config\") pod \"f9820813-0205-41b5-a0cd-be93c4b28372\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.715234 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ng5j\" (UniqueName: \"kubernetes.io/projected/f9820813-0205-41b5-a0cd-be93c4b28372-kube-api-access-6ng5j\") pod \"f9820813-0205-41b5-a0cd-be93c4b28372\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.715302 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-ovsdbserver-nb\") pod \"f9820813-0205-41b5-a0cd-be93c4b28372\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.715359 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-ovsdbserver-sb\") pod \"f9820813-0205-41b5-a0cd-be93c4b28372\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.715397 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-dns-swift-storage-0\") pod \"f9820813-0205-41b5-a0cd-be93c4b28372\" (UID: \"f9820813-0205-41b5-a0cd-be93c4b28372\") " Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.719315 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9820813-0205-41b5-a0cd-be93c4b28372-kube-api-access-6ng5j" (OuterVolumeSpecName: "kube-api-access-6ng5j") pod "f9820813-0205-41b5-a0cd-be93c4b28372" (UID: "f9820813-0205-41b5-a0cd-be93c4b28372"). InnerVolumeSpecName "kube-api-access-6ng5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.763733 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-config" (OuterVolumeSpecName: "config") pod "f9820813-0205-41b5-a0cd-be93c4b28372" (UID: "f9820813-0205-41b5-a0cd-be93c4b28372"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.779157 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f9820813-0205-41b5-a0cd-be93c4b28372" (UID: "f9820813-0205-41b5-a0cd-be93c4b28372"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.780317 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f9820813-0205-41b5-a0cd-be93c4b28372" (UID: "f9820813-0205-41b5-a0cd-be93c4b28372"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.781713 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f9820813-0205-41b5-a0cd-be93c4b28372" (UID: "f9820813-0205-41b5-a0cd-be93c4b28372"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.796016 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9820813-0205-41b5-a0cd-be93c4b28372" (UID: "f9820813-0205-41b5-a0cd-be93c4b28372"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.817977 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.818012 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.818023 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ng5j\" (UniqueName: \"kubernetes.io/projected/f9820813-0205-41b5-a0cd-be93c4b28372-kube-api-access-6ng5j\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.818033 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.818041 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:02 crc kubenswrapper[4779]: I1128 12:58:02.818050 4779 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f9820813-0205-41b5-a0cd-be93c4b28372-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.492358 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a49e784-eabb-4391-a69f-695474f302b7","Type":"ContainerStarted","Data":"9d0b722e3ef9581850dec1db5500c91443fe91e794b6ddf8ad364c3dd1a3b06d"} Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.494084 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.495234 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-chfvd" event={"ID":"393ab5ea-7256-4ffd-85c6-31c5548c4795","Type":"ContainerStarted","Data":"9fc1031303213304d75dc6c33acf72c57b51f43bc27668120a9fe3ed304736da"} Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.495265 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-chfvd" event={"ID":"393ab5ea-7256-4ffd-85c6-31c5548c4795","Type":"ContainerStarted","Data":"c0f3482e49fff77aca11be973b16e17d1c28956c2a4a7bb0f9426d081c4b1218"} Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.502033 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" event={"ID":"f9820813-0205-41b5-a0cd-be93c4b28372","Type":"ContainerDied","Data":"efd54ead9f408c81bba5c67f0ac0dfeb059d2b7b3e4ad1fd680c3eb22fcf8e11"} Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.502082 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbc4d444f-mgxbw" Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.502148 4779 scope.go:117] "RemoveContainer" containerID="98715bf2e6061faa11ee46c9723df1d5325386fd59bd784f15cec58a04dd23bd" Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.534858 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.691194302 podStartE2EDuration="7.53482193s" podCreationTimestamp="2025-11-28 12:57:56 +0000 UTC" firstStartedPulling="2025-11-28 12:57:57.321800287 +0000 UTC m=+1337.887475641" lastFinishedPulling="2025-11-28 12:58:02.165427915 +0000 UTC m=+1342.731103269" observedRunningTime="2025-11-28 12:58:03.516755685 +0000 UTC m=+1344.082431059" watchObservedRunningTime="2025-11-28 12:58:03.53482193 +0000 UTC m=+1344.100497344" Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.547429 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-chfvd" podStartSLOduration=2.547391871 podStartE2EDuration="2.547391871s" podCreationTimestamp="2025-11-28 12:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:58:03.538796175 +0000 UTC m=+1344.104471529" watchObservedRunningTime="2025-11-28 12:58:03.547391871 +0000 UTC m=+1344.113067225" Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.553608 4779 scope.go:117] "RemoveContainer" containerID="1787d782a64f4c914b55b5a88528a74c40f680e1d50b370be44beb6976efd6fb" Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.561064 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-mgxbw"] Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.576042 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fbc4d444f-mgxbw"] Nov 28 12:58:03 crc kubenswrapper[4779]: I1128 12:58:03.760866 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9820813-0205-41b5-a0cd-be93c4b28372" path="/var/lib/kubelet/pods/f9820813-0205-41b5-a0cd-be93c4b28372/volumes" Nov 28 12:58:08 crc kubenswrapper[4779]: I1128 12:58:08.582031 4779 generic.go:334] "Generic (PLEG): container finished" podID="393ab5ea-7256-4ffd-85c6-31c5548c4795" containerID="9fc1031303213304d75dc6c33acf72c57b51f43bc27668120a9fe3ed304736da" exitCode=0 Nov 28 12:58:08 crc kubenswrapper[4779]: I1128 12:58:08.582899 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-chfvd" 
event={"ID":"393ab5ea-7256-4ffd-85c6-31c5548c4795","Type":"ContainerDied","Data":"9fc1031303213304d75dc6c33acf72c57b51f43bc27668120a9fe3ed304736da"} Nov 28 12:58:08 crc kubenswrapper[4779]: I1128 12:58:08.840900 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 12:58:08 crc kubenswrapper[4779]: I1128 12:58:08.840960 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 12:58:09 crc kubenswrapper[4779]: I1128 12:58:09.891304 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 12:58:09 crc kubenswrapper[4779]: I1128 12:58:09.891315 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.123151 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.280324 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvkzt\" (UniqueName: \"kubernetes.io/projected/393ab5ea-7256-4ffd-85c6-31c5548c4795-kube-api-access-vvkzt\") pod \"393ab5ea-7256-4ffd-85c6-31c5548c4795\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.280479 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-config-data\") pod \"393ab5ea-7256-4ffd-85c6-31c5548c4795\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.280536 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-scripts\") pod \"393ab5ea-7256-4ffd-85c6-31c5548c4795\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.280582 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-combined-ca-bundle\") pod \"393ab5ea-7256-4ffd-85c6-31c5548c4795\" (UID: \"393ab5ea-7256-4ffd-85c6-31c5548c4795\") " Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.287623 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-scripts" (OuterVolumeSpecName: "scripts") pod "393ab5ea-7256-4ffd-85c6-31c5548c4795" (UID: "393ab5ea-7256-4ffd-85c6-31c5548c4795"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.290260 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/393ab5ea-7256-4ffd-85c6-31c5548c4795-kube-api-access-vvkzt" (OuterVolumeSpecName: "kube-api-access-vvkzt") pod "393ab5ea-7256-4ffd-85c6-31c5548c4795" (UID: "393ab5ea-7256-4ffd-85c6-31c5548c4795"). InnerVolumeSpecName "kube-api-access-vvkzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.308504 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-config-data" (OuterVolumeSpecName: "config-data") pod "393ab5ea-7256-4ffd-85c6-31c5548c4795" (UID: "393ab5ea-7256-4ffd-85c6-31c5548c4795"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.314446 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "393ab5ea-7256-4ffd-85c6-31c5548c4795" (UID: "393ab5ea-7256-4ffd-85c6-31c5548c4795"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.382796 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvkzt\" (UniqueName: \"kubernetes.io/projected/393ab5ea-7256-4ffd-85c6-31c5548c4795-kube-api-access-vvkzt\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.383021 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.383032 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.383040 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393ab5ea-7256-4ffd-85c6-31c5548c4795-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.606193 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-chfvd" event={"ID":"393ab5ea-7256-4ffd-85c6-31c5548c4795","Type":"ContainerDied","Data":"c0f3482e49fff77aca11be973b16e17d1c28956c2a4a7bb0f9426d081c4b1218"} Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.606250 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-chfvd" Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.606267 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0f3482e49fff77aca11be973b16e17d1c28956c2a4a7bb0f9426d081c4b1218" Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.829475 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.829695 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" containerName="nova-api-log" containerID="cri-o://ef1f9e5356a7d1f0d78c19c9151a8e7221c907b46c9b7983d024acb9d9de9926" gracePeriod=30 Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.829774 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" containerName="nova-api-api" containerID="cri-o://82e570a0aeb804d704249ed8f27d3f691a4dc3ba3b4638a20283e9b18b229d25" gracePeriod=30 Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.849445 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.849688 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a1cfc693-c027-4c51-bc8e-d1d5ac495223" containerName="nova-scheduler-scheduler" containerID="cri-o://6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930" gracePeriod=30 Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.864609 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.864833 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerName="nova-metadata-log" containerID="cri-o://e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6" gracePeriod=30 Nov 28 12:58:10 crc kubenswrapper[4779]: I1128 12:58:10.864952 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerName="nova-metadata-metadata" containerID="cri-o://cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97" gracePeriod=30 Nov 28 12:58:11 crc kubenswrapper[4779]: I1128 12:58:11.618592 4779 generic.go:334] "Generic (PLEG): container finished" podID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" containerID="ef1f9e5356a7d1f0d78c19c9151a8e7221c907b46c9b7983d024acb9d9de9926" exitCode=143 Nov 28 12:58:11 crc kubenswrapper[4779]: I1128 12:58:11.618653 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b7eb5ac-85fd-42e9-ae97-b0a51381528d","Type":"ContainerDied","Data":"ef1f9e5356a7d1f0d78c19c9151a8e7221c907b46c9b7983d024acb9d9de9926"} Nov 28 12:58:11 crc kubenswrapper[4779]: I1128 12:58:11.622229 4779 generic.go:334] "Generic (PLEG): container finished" podID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerID="e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6" exitCode=143 Nov 28 12:58:11 crc kubenswrapper[4779]: I1128 12:58:11.622405 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"12323b44-9b4d-4d78-991e-b92d4daefcb6","Type":"ContainerDied","Data":"e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6"} Nov 28 12:58:11 crc kubenswrapper[4779]: E1128 12:58:11.687557 4779 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 12:58:11 crc kubenswrapper[4779]: E1128 12:58:11.689055 4779 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 12:58:11 crc kubenswrapper[4779]: E1128 12:58:11.691278 4779 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 12:58:11 crc kubenswrapper[4779]: E1128 12:58:11.691336 4779 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a1cfc693-c027-4c51-bc8e-d1d5ac495223" containerName="nova-scheduler-scheduler" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.019399 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": read tcp 10.217.0.2:44188->10.217.0.201:8775: read: connection reset by peer" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.019411 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": read tcp 10.217.0.2:44192->10.217.0.201:8775: read: connection reset by peer" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.531043 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.655761 4779 generic.go:334] "Generic (PLEG): container finished" podID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerID="cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97" exitCode=0 Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.655824 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12323b44-9b4d-4d78-991e-b92d4daefcb6","Type":"ContainerDied","Data":"cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97"} Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.655863 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12323b44-9b4d-4d78-991e-b92d4daefcb6","Type":"ContainerDied","Data":"d11105c1cb8e05344957cd9447279dfd352de061eeb64f4662c64252dce4cd22"} Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.655891 4779 scope.go:117] "RemoveContainer" containerID="cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.655967 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.667893 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-config-data\") pod \"12323b44-9b4d-4d78-991e-b92d4daefcb6\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.667979 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12323b44-9b4d-4d78-991e-b92d4daefcb6-logs\") pod \"12323b44-9b4d-4d78-991e-b92d4daefcb6\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.668038 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t2tk\" (UniqueName: \"kubernetes.io/projected/12323b44-9b4d-4d78-991e-b92d4daefcb6-kube-api-access-4t2tk\") pod \"12323b44-9b4d-4d78-991e-b92d4daefcb6\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.668207 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-combined-ca-bundle\") pod \"12323b44-9b4d-4d78-991e-b92d4daefcb6\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.668312 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-nova-metadata-tls-certs\") pod \"12323b44-9b4d-4d78-991e-b92d4daefcb6\" (UID: \"12323b44-9b4d-4d78-991e-b92d4daefcb6\") " Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.669527 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12323b44-9b4d-4d78-991e-b92d4daefcb6-logs" (OuterVolumeSpecName: "logs") pod "12323b44-9b4d-4d78-991e-b92d4daefcb6" (UID: "12323b44-9b4d-4d78-991e-b92d4daefcb6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.675187 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12323b44-9b4d-4d78-991e-b92d4daefcb6-kube-api-access-4t2tk" (OuterVolumeSpecName: "kube-api-access-4t2tk") pod "12323b44-9b4d-4d78-991e-b92d4daefcb6" (UID: "12323b44-9b4d-4d78-991e-b92d4daefcb6"). InnerVolumeSpecName "kube-api-access-4t2tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.700778 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-config-data" (OuterVolumeSpecName: "config-data") pod "12323b44-9b4d-4d78-991e-b92d4daefcb6" (UID: "12323b44-9b4d-4d78-991e-b92d4daefcb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.704529 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12323b44-9b4d-4d78-991e-b92d4daefcb6" (UID: "12323b44-9b4d-4d78-991e-b92d4daefcb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.720690 4779 scope.go:117] "RemoveContainer" containerID="e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.757340 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "12323b44-9b4d-4d78-991e-b92d4daefcb6" (UID: "12323b44-9b4d-4d78-991e-b92d4daefcb6"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.769968 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t2tk\" (UniqueName: \"kubernetes.io/projected/12323b44-9b4d-4d78-991e-b92d4daefcb6-kube-api-access-4t2tk\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.769995 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.770004 4779 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.770014 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12323b44-9b4d-4d78-991e-b92d4daefcb6-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.770024 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12323b44-9b4d-4d78-991e-b92d4daefcb6-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.782333 4779 scope.go:117] "RemoveContainer" containerID="cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97" Nov 28 12:58:14 crc kubenswrapper[4779]: E1128 12:58:14.782917 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97\": container with ID starting with cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97 not found: ID does not exist" containerID="cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.782961 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97"} err="failed to get container status \"cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97\": rpc error: code = NotFound desc = could not find container \"cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97\": container with ID starting with cc4cf61717ce614e20f674244807640d617d9b35a7b191a1e147b6d9aadadd97 not found: ID does not exist" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.782988 4779 scope.go:117] "RemoveContainer" containerID="e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6" Nov 28 12:58:14 crc kubenswrapper[4779]: E1128 12:58:14.783345 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6\": container with ID starting with e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6 not found: ID does not exist" containerID="e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.783402 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6"} err="failed to get container status 
\"e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6\": rpc error: code = NotFound desc = could not find container \"e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6\": container with ID starting with e64347e284f6efbe656f8cd27a343538be1724c9a6be3f8047e878231429ebf6 not found: ID does not exist" Nov 28 12:58:14 crc kubenswrapper[4779]: I1128 12:58:14.996317 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.012554 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.026118 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 12:58:15 crc kubenswrapper[4779]: E1128 12:58:15.026680 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerName="nova-metadata-log" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.026701 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerName="nova-metadata-log" Nov 28 12:58:15 crc kubenswrapper[4779]: E1128 12:58:15.026718 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9820813-0205-41b5-a0cd-be93c4b28372" containerName="init" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.026729 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9820813-0205-41b5-a0cd-be93c4b28372" containerName="init" Nov 28 12:58:15 crc kubenswrapper[4779]: E1128 12:58:15.026777 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="393ab5ea-7256-4ffd-85c6-31c5548c4795" containerName="nova-manage" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.026791 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="393ab5ea-7256-4ffd-85c6-31c5548c4795" containerName="nova-manage" Nov 28 12:58:15 crc kubenswrapper[4779]: E1128 12:58:15.026808 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9820813-0205-41b5-a0cd-be93c4b28372" containerName="dnsmasq-dns" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.026821 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9820813-0205-41b5-a0cd-be93c4b28372" containerName="dnsmasq-dns" Nov 28 12:58:15 crc kubenswrapper[4779]: E1128 12:58:15.026841 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerName="nova-metadata-metadata" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.026851 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerName="nova-metadata-metadata" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.027145 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9820813-0205-41b5-a0cd-be93c4b28372" containerName="dnsmasq-dns" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.027181 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerName="nova-metadata-log" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.027202 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="393ab5ea-7256-4ffd-85c6-31c5548c4795" containerName="nova-manage" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.027221 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" containerName="nova-metadata-metadata" 
Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.028659 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.031128 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.031265 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.037544 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.176304 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bdd523c-399a-4ea8-999b-850a2dd6897c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.176367 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bdd523c-399a-4ea8-999b-850a2dd6897c-logs\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.176413 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq5rt\" (UniqueName: \"kubernetes.io/projected/9bdd523c-399a-4ea8-999b-850a2dd6897c-kube-api-access-bq5rt\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.176439 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bdd523c-399a-4ea8-999b-850a2dd6897c-config-data\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.176662 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bdd523c-399a-4ea8-999b-850a2dd6897c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.278212 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bdd523c-399a-4ea8-999b-850a2dd6897c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.278300 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bdd523c-399a-4ea8-999b-850a2dd6897c-logs\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.278370 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq5rt\" (UniqueName: 
\"kubernetes.io/projected/9bdd523c-399a-4ea8-999b-850a2dd6897c-kube-api-access-bq5rt\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.278392 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bdd523c-399a-4ea8-999b-850a2dd6897c-config-data\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.278433 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bdd523c-399a-4ea8-999b-850a2dd6897c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.278683 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bdd523c-399a-4ea8-999b-850a2dd6897c-logs\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.282743 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bdd523c-399a-4ea8-999b-850a2dd6897c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.282912 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bdd523c-399a-4ea8-999b-850a2dd6897c-config-data\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.283921 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bdd523c-399a-4ea8-999b-850a2dd6897c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.294828 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq5rt\" (UniqueName: \"kubernetes.io/projected/9bdd523c-399a-4ea8-999b-850a2dd6897c-kube-api-access-bq5rt\") pod \"nova-metadata-0\" (UID: \"9bdd523c-399a-4ea8-999b-850a2dd6897c\") " pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.361608 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.661318 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.710243 4779 generic.go:334] "Generic (PLEG): container finished" podID="a1cfc693-c027-4c51-bc8e-d1d5ac495223" containerID="6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930" exitCode=0 Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.710422 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.711175 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a1cfc693-c027-4c51-bc8e-d1d5ac495223","Type":"ContainerDied","Data":"6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930"} Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.711264 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a1cfc693-c027-4c51-bc8e-d1d5ac495223","Type":"ContainerDied","Data":"922d76bdc10aa5dbf0239f11458fef11bb616b4481cafb123691ea815eae6f03"} Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.711283 4779 scope.go:117] "RemoveContainer" containerID="6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.713954 4779 generic.go:334] "Generic (PLEG): container finished" podID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" containerID="82e570a0aeb804d704249ed8f27d3f691a4dc3ba3b4638a20283e9b18b229d25" exitCode=0 Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.714019 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b7eb5ac-85fd-42e9-ae97-b0a51381528d","Type":"ContainerDied","Data":"82e570a0aeb804d704249ed8f27d3f691a4dc3ba3b4638a20283e9b18b229d25"} Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.752679 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12323b44-9b4d-4d78-991e-b92d4daefcb6" path="/var/lib/kubelet/pods/12323b44-9b4d-4d78-991e-b92d4daefcb6/volumes" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.758031 4779 scope.go:117] "RemoveContainer" containerID="6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930" Nov 28 12:58:15 crc kubenswrapper[4779]: E1128 12:58:15.758409 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930\": container with ID starting with 6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930 not found: ID does not exist" containerID="6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.758440 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930"} err="failed to get container status \"6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930\": rpc error: code = NotFound desc = could not find container \"6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930\": container with ID starting with 6230a48249aab89bf769c90d7ef6bb0588b2058d6a90dd2932f6cb0f7b466930 not found: ID does not exist" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.780381 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.797572 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbvgx\" (UniqueName: \"kubernetes.io/projected/a1cfc693-c027-4c51-bc8e-d1d5ac495223-kube-api-access-kbvgx\") pod \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\" (UID: \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\") " Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.797697 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1cfc693-c027-4c51-bc8e-d1d5ac495223-combined-ca-bundle\") pod \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\" (UID: \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\") " Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.797825 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1cfc693-c027-4c51-bc8e-d1d5ac495223-config-data\") pod \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\" (UID: \"a1cfc693-c027-4c51-bc8e-d1d5ac495223\") " Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.808331 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1cfc693-c027-4c51-bc8e-d1d5ac495223-kube-api-access-kbvgx" (OuterVolumeSpecName: "kube-api-access-kbvgx") pod "a1cfc693-c027-4c51-bc8e-d1d5ac495223" (UID: "a1cfc693-c027-4c51-bc8e-d1d5ac495223"). InnerVolumeSpecName "kube-api-access-kbvgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.832733 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1cfc693-c027-4c51-bc8e-d1d5ac495223-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1cfc693-c027-4c51-bc8e-d1d5ac495223" (UID: "a1cfc693-c027-4c51-bc8e-d1d5ac495223"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.899158 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-internal-tls-certs\") pod \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.899281 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-public-tls-certs\") pod \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.899347 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-logs\") pod \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.899378 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-config-data\") pod \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.899398 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-combined-ca-bundle\") pod \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.899462 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v2cr\" (UniqueName: \"kubernetes.io/projected/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-kube-api-access-5v2cr\") pod \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\" (UID: \"2b7eb5ac-85fd-42e9-ae97-b0a51381528d\") " Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.899708 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-logs" (OuterVolumeSpecName: "logs") pod "2b7eb5ac-85fd-42e9-ae97-b0a51381528d" (UID: "2b7eb5ac-85fd-42e9-ae97-b0a51381528d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.900306 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbvgx\" (UniqueName: \"kubernetes.io/projected/a1cfc693-c027-4c51-bc8e-d1d5ac495223-kube-api-access-kbvgx\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.900325 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1cfc693-c027-4c51-bc8e-d1d5ac495223-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:15 crc kubenswrapper[4779]: I1128 12:58:15.900335 4779 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.350041 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-kube-api-access-5v2cr" (OuterVolumeSpecName: "kube-api-access-5v2cr") pod "2b7eb5ac-85fd-42e9-ae97-b0a51381528d" (UID: "2b7eb5ac-85fd-42e9-ae97-b0a51381528d"). InnerVolumeSpecName "kube-api-access-5v2cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.357710 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.357757 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.363443 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b7eb5ac-85fd-42e9-ae97-b0a51381528d" (UID: "2b7eb5ac-85fd-42e9-ae97-b0a51381528d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.389356 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.389700 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v2cr\" (UniqueName: \"kubernetes.io/projected/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-kube-api-access-5v2cr\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.392830 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-config-data" (OuterVolumeSpecName: "config-data") pod "2b7eb5ac-85fd-42e9-ae97-b0a51381528d" (UID: "2b7eb5ac-85fd-42e9-ae97-b0a51381528d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.411471 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2b7eb5ac-85fd-42e9-ae97-b0a51381528d" (UID: "2b7eb5ac-85fd-42e9-ae97-b0a51381528d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.415453 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1cfc693-c027-4c51-bc8e-d1d5ac495223-config-data" (OuterVolumeSpecName: "config-data") pod "a1cfc693-c027-4c51-bc8e-d1d5ac495223" (UID: "a1cfc693-c027-4c51-bc8e-d1d5ac495223"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.466136 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2b7eb5ac-85fd-42e9-ae97-b0a51381528d" (UID: "2b7eb5ac-85fd-42e9-ae97-b0a51381528d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.476903 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.504307 4779 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.504544 4779 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.504605 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1cfc693-c027-4c51-bc8e-d1d5ac495223-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.504666 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b7eb5ac-85fd-42e9-ae97-b0a51381528d-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.671230 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.680630 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.689719 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 12:58:16 crc kubenswrapper[4779]: E1128 12:58:16.690131 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" containerName="nova-api-log" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.690147 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" containerName="nova-api-log" Nov 28 12:58:16 crc kubenswrapper[4779]: E1128 12:58:16.690169 4779 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" containerName="nova-api-api" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.690175 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" containerName="nova-api-api" Nov 28 12:58:16 crc kubenswrapper[4779]: E1128 12:58:16.690200 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1cfc693-c027-4c51-bc8e-d1d5ac495223" containerName="nova-scheduler-scheduler" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.690206 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1cfc693-c027-4c51-bc8e-d1d5ac495223" containerName="nova-scheduler-scheduler" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.690377 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" containerName="nova-api-api" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.690398 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1cfc693-c027-4c51-bc8e-d1d5ac495223" containerName="nova-scheduler-scheduler" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.690405 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" containerName="nova-api-log" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.691036 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.693108 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.698005 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.723064 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bdd523c-399a-4ea8-999b-850a2dd6897c","Type":"ContainerStarted","Data":"9af71186260d2cdf740438ff55fbbb47e9254ab2c21714938297288738d3de0b"} Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.725032 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b7eb5ac-85fd-42e9-ae97-b0a51381528d","Type":"ContainerDied","Data":"942edb12742a5df209e17a8443c17b189124a0b1fef01086fea6ce0061de0331"} Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.725148 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.725182 4779 scope.go:117] "RemoveContainer" containerID="82e570a0aeb804d704249ed8f27d3f691a4dc3ba3b4638a20283e9b18b229d25" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.755537 4779 scope.go:117] "RemoveContainer" containerID="ef1f9e5356a7d1f0d78c19c9151a8e7221c907b46c9b7983d024acb9d9de9926" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.761763 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.773783 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.788219 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.790655 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.796371 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.796618 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.796629 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.796917 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.809160 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkk5s\" (UniqueName: \"kubernetes.io/projected/458c0c46-271b-40b8-aadc-10cfb6939487-kube-api-access-tkk5s\") pod \"nova-scheduler-0\" (UID: \"458c0c46-271b-40b8-aadc-10cfb6939487\") " pod="openstack/nova-scheduler-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.810040 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/458c0c46-271b-40b8-aadc-10cfb6939487-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"458c0c46-271b-40b8-aadc-10cfb6939487\") " pod="openstack/nova-scheduler-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.810176 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/458c0c46-271b-40b8-aadc-10cfb6939487-config-data\") pod \"nova-scheduler-0\" (UID: \"458c0c46-271b-40b8-aadc-10cfb6939487\") " pod="openstack/nova-scheduler-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.912131 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccd118c8-a309-4e17-952e-647ce404bbeb-logs\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.912178 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rtbw\" (UniqueName: \"kubernetes.io/projected/ccd118c8-a309-4e17-952e-647ce404bbeb-kube-api-access-4rtbw\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.912220 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkk5s\" (UniqueName: \"kubernetes.io/projected/458c0c46-271b-40b8-aadc-10cfb6939487-kube-api-access-tkk5s\") pod \"nova-scheduler-0\" (UID: \"458c0c46-271b-40b8-aadc-10cfb6939487\") " pod="openstack/nova-scheduler-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.912306 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccd118c8-a309-4e17-952e-647ce404bbeb-public-tls-certs\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.912335 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ccd118c8-a309-4e17-952e-647ce404bbeb-config-data\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.912415 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd118c8-a309-4e17-952e-647ce404bbeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.912466 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/458c0c46-271b-40b8-aadc-10cfb6939487-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"458c0c46-271b-40b8-aadc-10cfb6939487\") " pod="openstack/nova-scheduler-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.912498 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccd118c8-a309-4e17-952e-647ce404bbeb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.912529 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/458c0c46-271b-40b8-aadc-10cfb6939487-config-data\") pod \"nova-scheduler-0\" (UID: \"458c0c46-271b-40b8-aadc-10cfb6939487\") " pod="openstack/nova-scheduler-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.919070 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/458c0c46-271b-40b8-aadc-10cfb6939487-config-data\") pod \"nova-scheduler-0\" (UID: \"458c0c46-271b-40b8-aadc-10cfb6939487\") " pod="openstack/nova-scheduler-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.922242 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/458c0c46-271b-40b8-aadc-10cfb6939487-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"458c0c46-271b-40b8-aadc-10cfb6939487\") " pod="openstack/nova-scheduler-0" Nov 28 12:58:16 crc kubenswrapper[4779]: I1128 12:58:16.929864 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkk5s\" (UniqueName: \"kubernetes.io/projected/458c0c46-271b-40b8-aadc-10cfb6939487-kube-api-access-tkk5s\") pod \"nova-scheduler-0\" (UID: \"458c0c46-271b-40b8-aadc-10cfb6939487\") " pod="openstack/nova-scheduler-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.005193 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.014032 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccd118c8-a309-4e17-952e-647ce404bbeb-logs\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.014130 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rtbw\" (UniqueName: \"kubernetes.io/projected/ccd118c8-a309-4e17-952e-647ce404bbeb-kube-api-access-4rtbw\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.014243 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccd118c8-a309-4e17-952e-647ce404bbeb-public-tls-certs\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.014266 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd118c8-a309-4e17-952e-647ce404bbeb-config-data\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.014316 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd118c8-a309-4e17-952e-647ce404bbeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.014367 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccd118c8-a309-4e17-952e-647ce404bbeb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.014839 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccd118c8-a309-4e17-952e-647ce404bbeb-logs\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.020448 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccd118c8-a309-4e17-952e-647ce404bbeb-public-tls-certs\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.020696 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd118c8-a309-4e17-952e-647ce404bbeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.021631 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd118c8-a309-4e17-952e-647ce404bbeb-config-data\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:17 crc 
kubenswrapper[4779]: I1128 12:58:17.022141 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccd118c8-a309-4e17-952e-647ce404bbeb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.037891 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rtbw\" (UniqueName: \"kubernetes.io/projected/ccd118c8-a309-4e17-952e-647ce404bbeb-kube-api-access-4rtbw\") pod \"nova-api-0\" (UID: \"ccd118c8-a309-4e17-952e-647ce404bbeb\") " pod="openstack/nova-api-0" Nov 28 12:58:17 crc kubenswrapper[4779]: I1128 12:58:17.126625 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 12:58:18 crc kubenswrapper[4779]: I1128 12:58:17.520262 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 12:58:18 crc kubenswrapper[4779]: I1128 12:58:17.757332 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7eb5ac-85fd-42e9-ae97-b0a51381528d" path="/var/lib/kubelet/pods/2b7eb5ac-85fd-42e9-ae97-b0a51381528d/volumes" Nov 28 12:58:18 crc kubenswrapper[4779]: I1128 12:58:17.758899 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1cfc693-c027-4c51-bc8e-d1d5ac495223" path="/var/lib/kubelet/pods/a1cfc693-c027-4c51-bc8e-d1d5ac495223/volumes" Nov 28 12:58:18 crc kubenswrapper[4779]: I1128 12:58:17.762219 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bdd523c-399a-4ea8-999b-850a2dd6897c","Type":"ContainerStarted","Data":"cc2215361e333560c7769d28ca28a869ab7a43eca21ebe3ab5365dfcf853dff8"} Nov 28 12:58:18 crc kubenswrapper[4779]: I1128 12:58:17.762259 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bdd523c-399a-4ea8-999b-850a2dd6897c","Type":"ContainerStarted","Data":"0e2ea7ea8c7d19d5456916b9639f5df0d869c46e1996af4ae29f2805e1d3617e"} Nov 28 12:58:18 crc kubenswrapper[4779]: I1128 12:58:17.767801 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"458c0c46-271b-40b8-aadc-10cfb6939487","Type":"ContainerStarted","Data":"c684f3ad1d833b9374f50b55336f74645cd52da0a97514f134b8b3a58d7f4370"} Nov 28 12:58:18 crc kubenswrapper[4779]: I1128 12:58:17.797229 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.797204004 podStartE2EDuration="3.797204004s" podCreationTimestamp="2025-11-28 12:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:58:17.788403662 +0000 UTC m=+1358.354079056" watchObservedRunningTime="2025-11-28 12:58:17.797204004 +0000 UTC m=+1358.362879398" Nov 28 12:58:18 crc kubenswrapper[4779]: I1128 12:58:18.727004 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 12:58:18 crc kubenswrapper[4779]: I1128 12:58:18.784385 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"458c0c46-271b-40b8-aadc-10cfb6939487","Type":"ContainerStarted","Data":"1f03e68abf43a4fc4c9f4cd6dac89b520d0882989035a28fd6a9dce19bbcaff6"} Nov 28 12:58:18 crc kubenswrapper[4779]: I1128 12:58:18.789885 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"ccd118c8-a309-4e17-952e-647ce404bbeb","Type":"ContainerStarted","Data":"8e41c7786df09cf9ae220961e18cb8d0776e9ee7906ac0ed3ee4cb56086568d6"} Nov 28 12:58:18 crc kubenswrapper[4779]: I1128 12:58:18.815777 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.815757161 podStartE2EDuration="2.815757161s" podCreationTimestamp="2025-11-28 12:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:58:18.812800133 +0000 UTC m=+1359.378475527" watchObservedRunningTime="2025-11-28 12:58:18.815757161 +0000 UTC m=+1359.381432525" Nov 28 12:58:19 crc kubenswrapper[4779]: I1128 12:58:19.800495 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ccd118c8-a309-4e17-952e-647ce404bbeb","Type":"ContainerStarted","Data":"1ed5f299c2aa97d3a9f3019941eea6a83c123150b8ca3389988e876922b36c73"} Nov 28 12:58:19 crc kubenswrapper[4779]: I1128 12:58:19.800901 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ccd118c8-a309-4e17-952e-647ce404bbeb","Type":"ContainerStarted","Data":"7b10b3881a1b6f00e35b250dce05beaa8f0f9fd93aab7d8b5d7e526ca1f06801"} Nov 28 12:58:19 crc kubenswrapper[4779]: I1128 12:58:19.826951 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.826932274 podStartE2EDuration="3.826932274s" podCreationTimestamp="2025-11-28 12:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:58:19.815453442 +0000 UTC m=+1360.381128796" watchObservedRunningTime="2025-11-28 12:58:19.826932274 +0000 UTC m=+1360.392607638" Nov 28 12:58:20 crc kubenswrapper[4779]: I1128 12:58:20.361759 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 12:58:20 crc kubenswrapper[4779]: I1128 12:58:20.361852 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 12:58:22 crc kubenswrapper[4779]: I1128 12:58:22.005820 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 12:58:25 crc kubenswrapper[4779]: I1128 12:58:25.362386 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 12:58:25 crc kubenswrapper[4779]: I1128 12:58:25.363297 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 12:58:26 crc kubenswrapper[4779]: I1128 12:58:26.383261 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9bdd523c-399a-4ea8-999b-850a2dd6897c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 12:58:26 crc kubenswrapper[4779]: I1128 12:58:26.384958 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9bdd523c-399a-4ea8-999b-850a2dd6897c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 12:58:26 crc kubenswrapper[4779]: I1128 12:58:26.819842 4779 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 12:58:27 crc kubenswrapper[4779]: I1128 12:58:27.006304 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 12:58:27 crc kubenswrapper[4779]: I1128 12:58:27.054872 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 12:58:27 crc kubenswrapper[4779]: I1128 12:58:27.127953 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 12:58:27 crc kubenswrapper[4779]: I1128 12:58:27.128070 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 12:58:27 crc kubenswrapper[4779]: I1128 12:58:27.951810 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 12:58:28 crc kubenswrapper[4779]: I1128 12:58:28.142231 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ccd118c8-a309-4e17-952e-647ce404bbeb" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 12:58:28 crc kubenswrapper[4779]: I1128 12:58:28.142526 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ccd118c8-a309-4e17-952e-647ce404bbeb" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.022628 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9mvw8"] Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.024929 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.055156 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9mvw8"] Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.132458 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-catalog-content\") pod \"redhat-operators-9mvw8\" (UID: \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\") " pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.132719 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zs9g\" (UniqueName: \"kubernetes.io/projected/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-kube-api-access-2zs9g\") pod \"redhat-operators-9mvw8\" (UID: \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\") " pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.132887 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-utilities\") pod \"redhat-operators-9mvw8\" (UID: \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\") " pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.234628 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-catalog-content\") pod \"redhat-operators-9mvw8\" (UID: \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\") " pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.234853 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zs9g\" (UniqueName: \"kubernetes.io/projected/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-kube-api-access-2zs9g\") pod \"redhat-operators-9mvw8\" (UID: \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\") " pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.235014 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-utilities\") pod \"redhat-operators-9mvw8\" (UID: \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\") " pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.235136 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-catalog-content\") pod \"redhat-operators-9mvw8\" (UID: \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\") " pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.235740 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-utilities\") pod \"redhat-operators-9mvw8\" (UID: \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\") " pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.258999 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2zs9g\" (UniqueName: \"kubernetes.io/projected/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-kube-api-access-2zs9g\") pod \"redhat-operators-9mvw8\" (UID: \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\") " pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.347678 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.889656 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9mvw8"] Nov 28 12:58:31 crc kubenswrapper[4779]: I1128 12:58:31.946448 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mvw8" event={"ID":"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e","Type":"ContainerStarted","Data":"cf0ca44ffd70fc045fd4cd5b28a4fd3356e4ed86c33928f8bcd09f517f5bf505"} Nov 28 12:58:32 crc kubenswrapper[4779]: I1128 12:58:32.965945 4779 generic.go:334] "Generic (PLEG): container finished" podID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" containerID="575d2b023ecab617194d6ca170a19c373df4f0b9541b81c43bfd55b7fda7028a" exitCode=0 Nov 28 12:58:32 crc kubenswrapper[4779]: I1128 12:58:32.965984 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mvw8" event={"ID":"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e","Type":"ContainerDied","Data":"575d2b023ecab617194d6ca170a19c373df4f0b9541b81c43bfd55b7fda7028a"} Nov 28 12:58:33 crc kubenswrapper[4779]: I1128 12:58:33.981383 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mvw8" event={"ID":"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e","Type":"ContainerStarted","Data":"7127a4e0381a92c826330f55a57f12210e34a61b3af4c95d946d3730f80cc337"} Nov 28 12:58:35 crc kubenswrapper[4779]: I1128 12:58:35.370222 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 12:58:35 crc kubenswrapper[4779]: I1128 12:58:35.371794 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 12:58:35 crc kubenswrapper[4779]: I1128 12:58:35.375512 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 12:58:36 crc kubenswrapper[4779]: I1128 12:58:36.006288 4779 generic.go:334] "Generic (PLEG): container finished" podID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" containerID="7127a4e0381a92c826330f55a57f12210e34a61b3af4c95d946d3730f80cc337" exitCode=0 Nov 28 12:58:36 crc kubenswrapper[4779]: I1128 12:58:36.007159 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mvw8" event={"ID":"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e","Type":"ContainerDied","Data":"7127a4e0381a92c826330f55a57f12210e34a61b3af4c95d946d3730f80cc337"} Nov 28 12:58:36 crc kubenswrapper[4779]: I1128 12:58:36.102296 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 12:58:37 crc kubenswrapper[4779]: I1128 12:58:37.022070 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mvw8" event={"ID":"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e","Type":"ContainerStarted","Data":"4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3"} Nov 28 12:58:37 crc kubenswrapper[4779]: I1128 12:58:37.070836 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-9mvw8" podStartSLOduration=2.585545662 podStartE2EDuration="6.070818341s" podCreationTimestamp="2025-11-28 12:58:31 +0000 UTC" firstStartedPulling="2025-11-28 12:58:32.971725355 +0000 UTC m=+1373.537400739" lastFinishedPulling="2025-11-28 12:58:36.456998064 +0000 UTC m=+1377.022673418" observedRunningTime="2025-11-28 12:58:37.064704917 +0000 UTC m=+1377.630380271" watchObservedRunningTime="2025-11-28 12:58:37.070818341 +0000 UTC m=+1377.636493705" Nov 28 12:58:37 crc kubenswrapper[4779]: I1128 12:58:37.142520 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 12:58:37 crc kubenswrapper[4779]: I1128 12:58:37.142842 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 12:58:37 crc kubenswrapper[4779]: I1128 12:58:37.144132 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 12:58:37 crc kubenswrapper[4779]: I1128 12:58:37.153257 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 12:58:38 crc kubenswrapper[4779]: I1128 12:58:38.033885 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 12:58:38 crc kubenswrapper[4779]: I1128 12:58:38.045025 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 12:58:41 crc kubenswrapper[4779]: I1128 12:58:41.348743 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:41 crc kubenswrapper[4779]: I1128 12:58:41.349585 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:42 crc kubenswrapper[4779]: I1128 12:58:42.425918 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9mvw8" podUID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" containerName="registry-server" probeResult="failure" output=< Nov 28 12:58:42 crc kubenswrapper[4779]: timeout: failed to connect service ":50051" within 1s Nov 28 12:58:42 crc kubenswrapper[4779]: > Nov 28 12:58:46 crc kubenswrapper[4779]: I1128 12:58:46.285765 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:58:46 crc kubenswrapper[4779]: I1128 12:58:46.292375 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:58:46 crc kubenswrapper[4779]: I1128 12:58:46.292469 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 12:58:46 crc kubenswrapper[4779]: I1128 12:58:46.293885 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c21214830b8f1e0b08f1ae5ac2fb71de0793255942c5f72a4ead485743abffa"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:58:46 crc kubenswrapper[4779]: I1128 12:58:46.294026 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://7c21214830b8f1e0b08f1ae5ac2fb71de0793255942c5f72a4ead485743abffa" gracePeriod=600 Nov 28 12:58:46 crc kubenswrapper[4779]: I1128 12:58:46.417773 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 12:58:47 crc kubenswrapper[4779]: I1128 12:58:47.146137 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="7c21214830b8f1e0b08f1ae5ac2fb71de0793255942c5f72a4ead485743abffa" exitCode=0 Nov 28 12:58:47 crc kubenswrapper[4779]: I1128 12:58:47.146213 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"7c21214830b8f1e0b08f1ae5ac2fb71de0793255942c5f72a4ead485743abffa"} Nov 28 12:58:47 crc kubenswrapper[4779]: I1128 12:58:47.146620 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf"} Nov 28 12:58:47 crc kubenswrapper[4779]: I1128 12:58:47.146667 4779 scope.go:117] "RemoveContainer" containerID="19d1e85c2d2159fafc03753bd25b2d9cba3a3d26bcb40723109739bd64095a04" Nov 28 12:58:47 crc kubenswrapper[4779]: I1128 12:58:47.327538 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 12:58:51 crc kubenswrapper[4779]: I1128 12:58:51.019473 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="1c8c979a-2995-4080-a0b6-173e62faceee" containerName="rabbitmq" containerID="cri-o://c5011532b76ebd7f52be3d1adec88fce96e7c546eec695918b4b16e74e7c8d0e" gracePeriod=604796 Nov 28 12:58:51 crc kubenswrapper[4779]: I1128 12:58:51.396747 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:51 crc kubenswrapper[4779]: I1128 12:58:51.449958 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:51 crc kubenswrapper[4779]: I1128 12:58:51.572760 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="486d0b33-cc59-495a-ba1f-e51c47e0d37e" containerName="rabbitmq" containerID="cri-o://49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647" gracePeriod=604796 Nov 28 12:58:51 crc kubenswrapper[4779]: I1128 12:58:51.633716 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9mvw8"] Nov 28 12:58:53 crc kubenswrapper[4779]: I1128 12:58:53.203312 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9mvw8" podUID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" containerName="registry-server" containerID="cri-o://4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3" gracePeriod=2 Nov 28 12:58:53 crc 
kubenswrapper[4779]: I1128 12:58:53.741385 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:53 crc kubenswrapper[4779]: I1128 12:58:53.838166 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zs9g\" (UniqueName: \"kubernetes.io/projected/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-kube-api-access-2zs9g\") pod \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\" (UID: \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\") " Nov 28 12:58:53 crc kubenswrapper[4779]: I1128 12:58:53.838245 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-catalog-content\") pod \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\" (UID: \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\") " Nov 28 12:58:53 crc kubenswrapper[4779]: I1128 12:58:53.838418 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-utilities\") pod \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\" (UID: \"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e\") " Nov 28 12:58:53 crc kubenswrapper[4779]: I1128 12:58:53.840511 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-utilities" (OuterVolumeSpecName: "utilities") pod "0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" (UID: "0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:58:53 crc kubenswrapper[4779]: I1128 12:58:53.846523 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-kube-api-access-2zs9g" (OuterVolumeSpecName: "kube-api-access-2zs9g") pod "0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" (UID: "0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e"). InnerVolumeSpecName "kube-api-access-2zs9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:58:53 crc kubenswrapper[4779]: I1128 12:58:53.940634 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" (UID: "0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:58:53 crc kubenswrapper[4779]: I1128 12:58:53.941074 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zs9g\" (UniqueName: \"kubernetes.io/projected/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-kube-api-access-2zs9g\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:53 crc kubenswrapper[4779]: I1128 12:58:53.941108 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:53 crc kubenswrapper[4779]: I1128 12:58:53.941117 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.215491 4779 generic.go:334] "Generic (PLEG): container finished" podID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" containerID="4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3" exitCode=0 Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.215583 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mvw8" event={"ID":"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e","Type":"ContainerDied","Data":"4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3"} Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.215646 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9mvw8" Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.215832 4779 scope.go:117] "RemoveContainer" containerID="4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3" Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.215816 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mvw8" event={"ID":"0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e","Type":"ContainerDied","Data":"cf0ca44ffd70fc045fd4cd5b28a4fd3356e4ed86c33928f8bcd09f517f5bf505"} Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.239066 4779 scope.go:117] "RemoveContainer" containerID="7127a4e0381a92c826330f55a57f12210e34a61b3af4c95d946d3730f80cc337" Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.275921 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9mvw8"] Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.285073 4779 scope.go:117] "RemoveContainer" containerID="575d2b023ecab617194d6ca170a19c373df4f0b9541b81c43bfd55b7fda7028a" Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.290961 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9mvw8"] Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.321047 4779 scope.go:117] "RemoveContainer" containerID="4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3" Nov 28 12:58:54 crc kubenswrapper[4779]: E1128 12:58:54.321455 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3\": container with ID starting with 4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3 not found: ID does not exist" containerID="4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3" Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.321552 4779 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3"} err="failed to get container status \"4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3\": rpc error: code = NotFound desc = could not find container \"4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3\": container with ID starting with 4fb261f277d21472d4d4c04a8cf8b4e1166a6c5a80a15ff3cf7884ddf53645e3 not found: ID does not exist" Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.321624 4779 scope.go:117] "RemoveContainer" containerID="7127a4e0381a92c826330f55a57f12210e34a61b3af4c95d946d3730f80cc337" Nov 28 12:58:54 crc kubenswrapper[4779]: E1128 12:58:54.322065 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7127a4e0381a92c826330f55a57f12210e34a61b3af4c95d946d3730f80cc337\": container with ID starting with 7127a4e0381a92c826330f55a57f12210e34a61b3af4c95d946d3730f80cc337 not found: ID does not exist" containerID="7127a4e0381a92c826330f55a57f12210e34a61b3af4c95d946d3730f80cc337" Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.322221 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7127a4e0381a92c826330f55a57f12210e34a61b3af4c95d946d3730f80cc337"} err="failed to get container status \"7127a4e0381a92c826330f55a57f12210e34a61b3af4c95d946d3730f80cc337\": rpc error: code = NotFound desc = could not find container \"7127a4e0381a92c826330f55a57f12210e34a61b3af4c95d946d3730f80cc337\": container with ID starting with 7127a4e0381a92c826330f55a57f12210e34a61b3af4c95d946d3730f80cc337 not found: ID does not exist" Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.322251 4779 scope.go:117] "RemoveContainer" containerID="575d2b023ecab617194d6ca170a19c373df4f0b9541b81c43bfd55b7fda7028a" Nov 28 12:58:54 crc kubenswrapper[4779]: E1128 12:58:54.322663 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"575d2b023ecab617194d6ca170a19c373df4f0b9541b81c43bfd55b7fda7028a\": container with ID starting with 575d2b023ecab617194d6ca170a19c373df4f0b9541b81c43bfd55b7fda7028a not found: ID does not exist" containerID="575d2b023ecab617194d6ca170a19c373df4f0b9541b81c43bfd55b7fda7028a" Nov 28 12:58:54 crc kubenswrapper[4779]: I1128 12:58:54.322752 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"575d2b023ecab617194d6ca170a19c373df4f0b9541b81c43bfd55b7fda7028a"} err="failed to get container status \"575d2b023ecab617194d6ca170a19c373df4f0b9541b81c43bfd55b7fda7028a\": rpc error: code = NotFound desc = could not find container \"575d2b023ecab617194d6ca170a19c373df4f0b9541b81c43bfd55b7fda7028a\": container with ID starting with 575d2b023ecab617194d6ca170a19c373df4f0b9541b81c43bfd55b7fda7028a not found: ID does not exist" Nov 28 12:58:55 crc kubenswrapper[4779]: I1128 12:58:55.738318 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" path="/var/lib/kubelet/pods/0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e/volumes" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.249269 4779 generic.go:334] "Generic (PLEG): container finished" podID="1c8c979a-2995-4080-a0b6-173e62faceee" containerID="c5011532b76ebd7f52be3d1adec88fce96e7c546eec695918b4b16e74e7c8d0e" exitCode=0 Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 
12:58:57.249377 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1c8c979a-2995-4080-a0b6-173e62faceee","Type":"ContainerDied","Data":"c5011532b76ebd7f52be3d1adec88fce96e7c546eec695918b4b16e74e7c8d0e"} Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.661945 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.721906 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-confd\") pod \"1c8c979a-2995-4080-a0b6-173e62faceee\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.721976 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c8c979a-2995-4080-a0b6-173e62faceee-erlang-cookie-secret\") pod \"1c8c979a-2995-4080-a0b6-173e62faceee\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.722052 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-config-data\") pod \"1c8c979a-2995-4080-a0b6-173e62faceee\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.722088 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-plugins-conf\") pod \"1c8c979a-2995-4080-a0b6-173e62faceee\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.722161 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-tls\") pod \"1c8c979a-2995-4080-a0b6-173e62faceee\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.722185 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsbgc\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-kube-api-access-fsbgc\") pod \"1c8c979a-2995-4080-a0b6-173e62faceee\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.722225 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c8c979a-2995-4080-a0b6-173e62faceee-pod-info\") pod \"1c8c979a-2995-4080-a0b6-173e62faceee\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.722305 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-erlang-cookie\") pod \"1c8c979a-2995-4080-a0b6-173e62faceee\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.722329 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"1c8c979a-2995-4080-a0b6-173e62faceee\" 
(UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.722349 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-server-conf\") pod \"1c8c979a-2995-4080-a0b6-173e62faceee\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.722371 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-plugins\") pod \"1c8c979a-2995-4080-a0b6-173e62faceee\" (UID: \"1c8c979a-2995-4080-a0b6-173e62faceee\") " Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.727717 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1c8c979a-2995-4080-a0b6-173e62faceee" (UID: "1c8c979a-2995-4080-a0b6-173e62faceee"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.730000 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1c8c979a-2995-4080-a0b6-173e62faceee" (UID: "1c8c979a-2995-4080-a0b6-173e62faceee"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.730017 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1c8c979a-2995-4080-a0b6-173e62faceee" (UID: "1c8c979a-2995-4080-a0b6-173e62faceee"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.733394 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-kube-api-access-fsbgc" (OuterVolumeSpecName: "kube-api-access-fsbgc") pod "1c8c979a-2995-4080-a0b6-173e62faceee" (UID: "1c8c979a-2995-4080-a0b6-173e62faceee"). InnerVolumeSpecName "kube-api-access-fsbgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.739584 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1c8c979a-2995-4080-a0b6-173e62faceee" (UID: "1c8c979a-2995-4080-a0b6-173e62faceee"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.738456 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "1c8c979a-2995-4080-a0b6-173e62faceee" (UID: "1c8c979a-2995-4080-a0b6-173e62faceee"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.744006 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8c979a-2995-4080-a0b6-173e62faceee-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1c8c979a-2995-4080-a0b6-173e62faceee" (UID: "1c8c979a-2995-4080-a0b6-173e62faceee"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.766927 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1c8c979a-2995-4080-a0b6-173e62faceee-pod-info" (OuterVolumeSpecName: "pod-info") pod "1c8c979a-2995-4080-a0b6-173e62faceee" (UID: "1c8c979a-2995-4080-a0b6-173e62faceee"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.777426 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-config-data" (OuterVolumeSpecName: "config-data") pod "1c8c979a-2995-4080-a0b6-173e62faceee" (UID: "1c8c979a-2995-4080-a0b6-173e62faceee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.824472 4779 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.824796 4779 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.824806 4779 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.824815 4779 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c8c979a-2995-4080-a0b6-173e62faceee-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.824824 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.824832 4779 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.824839 4779 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.824848 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsbgc\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-kube-api-access-fsbgc\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:57 crc 
kubenswrapper[4779]: I1128 12:58:57.824856 4779 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c8c979a-2995-4080-a0b6-173e62faceee-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.829547 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-server-conf" (OuterVolumeSpecName: "server-conf") pod "1c8c979a-2995-4080-a0b6-173e62faceee" (UID: "1c8c979a-2995-4080-a0b6-173e62faceee"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.848908 4779 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.904289 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1c8c979a-2995-4080-a0b6-173e62faceee" (UID: "1c8c979a-2995-4080-a0b6-173e62faceee"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.926168 4779 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c8c979a-2995-4080-a0b6-173e62faceee-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.926210 4779 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:57 crc kubenswrapper[4779]: I1128 12:58:57.926222 4779 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c8c979a-2995-4080-a0b6-173e62faceee-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.083602 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.128882 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-tls\") pod \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.128944 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-server-conf\") pod \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.128994 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-confd\") pod \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.129062 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvw96\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-kube-api-access-nvw96\") pod \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.129113 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/486d0b33-cc59-495a-ba1f-e51c47e0d37e-pod-info\") pod \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.129146 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-plugins\") pod \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.129177 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-plugins-conf\") pod \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.129230 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-config-data\") pod \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.129301 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-erlang-cookie\") pod \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.129355 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\" (UID: 
\"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.129402 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/486d0b33-cc59-495a-ba1f-e51c47e0d37e-erlang-cookie-secret\") pod \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\" (UID: \"486d0b33-cc59-495a-ba1f-e51c47e0d37e\") " Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.129885 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "486d0b33-cc59-495a-ba1f-e51c47e0d37e" (UID: "486d0b33-cc59-495a-ba1f-e51c47e0d37e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.133350 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "486d0b33-cc59-495a-ba1f-e51c47e0d37e" (UID: "486d0b33-cc59-495a-ba1f-e51c47e0d37e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.133555 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-kube-api-access-nvw96" (OuterVolumeSpecName: "kube-api-access-nvw96") pod "486d0b33-cc59-495a-ba1f-e51c47e0d37e" (UID: "486d0b33-cc59-495a-ba1f-e51c47e0d37e"). InnerVolumeSpecName "kube-api-access-nvw96". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.133763 4779 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.133805 4779 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.133818 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvw96\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-kube-api-access-nvw96\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.134944 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "486d0b33-cc59-495a-ba1f-e51c47e0d37e" (UID: "486d0b33-cc59-495a-ba1f-e51c47e0d37e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.135387 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "486d0b33-cc59-495a-ba1f-e51c47e0d37e" (UID: "486d0b33-cc59-495a-ba1f-e51c47e0d37e"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.135511 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486d0b33-cc59-495a-ba1f-e51c47e0d37e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "486d0b33-cc59-495a-ba1f-e51c47e0d37e" (UID: "486d0b33-cc59-495a-ba1f-e51c47e0d37e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.142122 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/486d0b33-cc59-495a-ba1f-e51c47e0d37e-pod-info" (OuterVolumeSpecName: "pod-info") pod "486d0b33-cc59-495a-ba1f-e51c47e0d37e" (UID: "486d0b33-cc59-495a-ba1f-e51c47e0d37e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.142351 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "486d0b33-cc59-495a-ba1f-e51c47e0d37e" (UID: "486d0b33-cc59-495a-ba1f-e51c47e0d37e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.160035 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-config-data" (OuterVolumeSpecName: "config-data") pod "486d0b33-cc59-495a-ba1f-e51c47e0d37e" (UID: "486d0b33-cc59-495a-ba1f-e51c47e0d37e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.235173 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.235220 4779 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.235230 4779 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/486d0b33-cc59-495a-ba1f-e51c47e0d37e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.235244 4779 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.235255 4779 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/486d0b33-cc59-495a-ba1f-e51c47e0d37e-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.235264 4779 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.257833 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-server-conf" (OuterVolumeSpecName: "server-conf") pod "486d0b33-cc59-495a-ba1f-e51c47e0d37e" (UID: "486d0b33-cc59-495a-ba1f-e51c47e0d37e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.264174 4779 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.293815 4779 generic.go:334] "Generic (PLEG): container finished" podID="486d0b33-cc59-495a-ba1f-e51c47e0d37e" containerID="49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647" exitCode=0 Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.293879 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"486d0b33-cc59-495a-ba1f-e51c47e0d37e","Type":"ContainerDied","Data":"49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647"} Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.293915 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"486d0b33-cc59-495a-ba1f-e51c47e0d37e","Type":"ContainerDied","Data":"1ba9d72ed5e744bd85732a0b99869643440eeff5f16e16d5c32a71aa5427148d"} Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.293931 4779 scope.go:117] "RemoveContainer" containerID="49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.294081 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.304868 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1c8c979a-2995-4080-a0b6-173e62faceee","Type":"ContainerDied","Data":"25b313164b8d335e28fed852f3a3ed9d335636280c71f57d26e606166bc4fcbb"} Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.304965 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.336886 4779 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.336933 4779 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/486d0b33-cc59-495a-ba1f-e51c47e0d37e-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.382225 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "486d0b33-cc59-495a-ba1f-e51c47e0d37e" (UID: "486d0b33-cc59-495a-ba1f-e51c47e0d37e"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.438132 4779 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/486d0b33-cc59-495a-ba1f-e51c47e0d37e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.455750 4779 scope.go:117] "RemoveContainer" containerID="dd6084584efb63ab6484ff18173ba4f693e06a7fcd6ab961420fb6eb533c5733" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.460391 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.470023 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.489298 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 12:58:58 crc kubenswrapper[4779]: E1128 12:58:58.489687 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" containerName="extract-content" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.489707 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" containerName="extract-content" Nov 28 12:58:58 crc kubenswrapper[4779]: E1128 12:58:58.489747 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8c979a-2995-4080-a0b6-173e62faceee" containerName="setup-container" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.489755 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8c979a-2995-4080-a0b6-173e62faceee" containerName="setup-container" Nov 28 12:58:58 crc kubenswrapper[4779]: E1128 12:58:58.489773 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8c979a-2995-4080-a0b6-173e62faceee" containerName="rabbitmq" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.489782 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8c979a-2995-4080-a0b6-173e62faceee" containerName="rabbitmq" Nov 28 12:58:58 crc kubenswrapper[4779]: E1128 12:58:58.489791 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" containerName="extract-utilities" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.489797 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" containerName="extract-utilities" Nov 28 12:58:58 crc kubenswrapper[4779]: E1128 12:58:58.489805 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486d0b33-cc59-495a-ba1f-e51c47e0d37e" containerName="rabbitmq" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.489810 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="486d0b33-cc59-495a-ba1f-e51c47e0d37e" containerName="rabbitmq" Nov 28 12:58:58 crc kubenswrapper[4779]: E1128 12:58:58.489820 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486d0b33-cc59-495a-ba1f-e51c47e0d37e" containerName="setup-container" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.489826 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="486d0b33-cc59-495a-ba1f-e51c47e0d37e" containerName="setup-container" Nov 28 12:58:58 crc kubenswrapper[4779]: E1128 12:58:58.489835 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" containerName="registry-server" Nov 28 12:58:58 crc 
kubenswrapper[4779]: I1128 12:58:58.489841 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" containerName="registry-server" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.489993 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b9f3b8c-b51f-4ff5-8d7a-89f2e6e78e3e" containerName="registry-server" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.490007 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="486d0b33-cc59-495a-ba1f-e51c47e0d37e" containerName="rabbitmq" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.490023 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8c979a-2995-4080-a0b6-173e62faceee" containerName="rabbitmq" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.490991 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.497335 4779 scope.go:117] "RemoveContainer" containerID="49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.497732 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.498025 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.498246 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.498462 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.498631 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.498755 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rfrf7" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.499530 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 12:58:58 crc kubenswrapper[4779]: E1128 12:58:58.499716 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647\": container with ID starting with 49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647 not found: ID does not exist" containerID="49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.499754 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647"} err="failed to get container status \"49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647\": rpc error: code = NotFound desc = could not find container \"49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647\": container with ID starting with 49b506c13222ec7c43ad84124ec49e368724a10ad38c3fa28aa5a33a5a360647 not found: ID does not exist" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.499776 4779 scope.go:117] "RemoveContainer" containerID="dd6084584efb63ab6484ff18173ba4f693e06a7fcd6ab961420fb6eb533c5733" Nov 28 12:58:58 crc 
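[annotation] The "ContainerStatus from runtime service failed ... NotFound" error above is benign: RemoveContainer raced the runtime's own cleanup, so the container ID is already gone by the time its status is queried, and the kubelet tolerates the error. A sketch of how such CRI errors can be classified with the standard gRPC status codes:

    // Sketch: treat gRPC NotFound from the container runtime as "already
    // removed", the way the kubelet tolerates the error logged above.
    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a CRI call failed only because the
    // container no longer exists in the runtime.
    func alreadyGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        criErr := status.Error(codes.NotFound, "could not find container")
        fmt.Println(alreadyGone(criErr))             // true: safe to ignore
        fmt.Println(alreadyGone(errors.New("boom"))) // false: a real failure
    }

[log continues]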
kubenswrapper[4779]: E1128 12:58:58.500579 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd6084584efb63ab6484ff18173ba4f693e06a7fcd6ab961420fb6eb533c5733\": container with ID starting with dd6084584efb63ab6484ff18173ba4f693e06a7fcd6ab961420fb6eb533c5733 not found: ID does not exist" containerID="dd6084584efb63ab6484ff18173ba4f693e06a7fcd6ab961420fb6eb533c5733" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.500610 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6084584efb63ab6484ff18173ba4f693e06a7fcd6ab961420fb6eb533c5733"} err="failed to get container status \"dd6084584efb63ab6484ff18173ba4f693e06a7fcd6ab961420fb6eb533c5733\": rpc error: code = NotFound desc = could not find container \"dd6084584efb63ab6484ff18173ba4f693e06a7fcd6ab961420fb6eb533c5733\": container with ID starting with dd6084584efb63ab6484ff18173ba4f693e06a7fcd6ab961420fb6eb533c5733 not found: ID does not exist" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.500631 4779 scope.go:117] "RemoveContainer" containerID="c5011532b76ebd7f52be3d1adec88fce96e7c546eec695918b4b16e74e7c8d0e" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.519614 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.534576 4779 scope.go:117] "RemoveContainer" containerID="83a10acad2ad96fbf02da7dec091f283aa84123110fb7c2a468f72da9c94c337" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.538929 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0a12679-627a-4310-a9f7-93731231b12e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.538988 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0a12679-627a-4310-a9f7-93731231b12e-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.539017 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8c5l\" (UniqueName: \"kubernetes.io/projected/b0a12679-627a-4310-a9f7-93731231b12e-kube-api-access-b8c5l\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.539040 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0a12679-627a-4310-a9f7-93731231b12e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.539124 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0a12679-627a-4310-a9f7-93731231b12e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.539309 4779 
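[annotation] The VerifyControllerAttachedVolume burst that follows enumerates every volume of the replacement rabbitmq-server-0 pod: secret, configmap, projected, downward-API, empty-dir, and local-volume sources. A sketch of how three of those sources look as client-go structs; the configmap and secret names match the reflector cache lines above, but this is a hand-written illustration, not the operator's actual manifest:

    // Sketch: three of the volume sources named by the reconciler, as
    // client-go structs. Illustrative only.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vols := []corev1.Volume{
            {Name: "server-conf", VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "rabbitmq-server-conf"},
                },
            }},
            {Name: "erlang-cookie-secret", VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "rabbitmq-erlang-cookie"},
            }},
            {Name: "rabbitmq-erlang-cookie", VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{},
            }},
        }
        for _, v := range vols {
            fmt.Println("volume:", v.Name)
        }
    }

[log continues]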
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0a12679-627a-4310-a9f7-93731231b12e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.539350 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0a12679-627a-4310-a9f7-93731231b12e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.539476 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0a12679-627a-4310-a9f7-93731231b12e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.539527 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.539571 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0a12679-627a-4310-a9f7-93731231b12e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.539602 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0a12679-627a-4310-a9f7-93731231b12e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.626972 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.634654 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.641240 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0a12679-627a-4310-a9f7-93731231b12e-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.641311 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8c5l\" (UniqueName: \"kubernetes.io/projected/b0a12679-627a-4310-a9f7-93731231b12e-kube-api-access-b8c5l\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.641353 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0a12679-627a-4310-a9f7-93731231b12e-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.641385 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0a12679-627a-4310-a9f7-93731231b12e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.641453 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0a12679-627a-4310-a9f7-93731231b12e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.641480 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0a12679-627a-4310-a9f7-93731231b12e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.641536 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0a12679-627a-4310-a9f7-93731231b12e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.641574 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.641598 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0a12679-627a-4310-a9f7-93731231b12e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.641625 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0a12679-627a-4310-a9f7-93731231b12e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.641688 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0a12679-627a-4310-a9f7-93731231b12e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.642121 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0a12679-627a-4310-a9f7-93731231b12e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.642186 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/b0a12679-627a-4310-a9f7-93731231b12e-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.642190 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.642411 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0a12679-627a-4310-a9f7-93731231b12e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.644022 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0a12679-627a-4310-a9f7-93731231b12e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.648222 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0a12679-627a-4310-a9f7-93731231b12e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.648276 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0a12679-627a-4310-a9f7-93731231b12e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.649761 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0a12679-627a-4310-a9f7-93731231b12e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.651757 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0a12679-627a-4310-a9f7-93731231b12e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.661729 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.663796 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.664680 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8c5l\" (UniqueName: \"kubernetes.io/projected/b0a12679-627a-4310-a9f7-93731231b12e-kube-api-access-b8c5l\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.665751 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0a12679-627a-4310-a9f7-93731231b12e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.669574 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.669731 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.669853 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.669909 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.670002 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.670035 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hd9sn" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.670141 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.691593 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.703368 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b0a12679-627a-4310-a9f7-93731231b12e\") " pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.744148 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.744195 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.744219 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.744316 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.744380 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.744510 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.744591 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.744641 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.744682 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.744776 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ml9q\" (UniqueName: \"kubernetes.io/projected/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-kube-api-access-2ml9q\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.744823 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.839359 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.846691 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.846825 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.846878 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.846921 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.847048 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ml9q\" (UniqueName: \"kubernetes.io/projected/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-kube-api-access-2ml9q\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.847122 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.847184 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.847204 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.847229 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.847307 4779 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.847341 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.848759 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.850612 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.852293 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.853308 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.853681 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.854455 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.854924 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.855718 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.856846 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.857438 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.876129 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ml9q\" (UniqueName: \"kubernetes.io/projected/80c2f0f7-d979-400e-b9fe-9369c3fc8ec5-kube-api-access-2ml9q\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:58 crc kubenswrapper[4779]: I1128 12:58:58.890749 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:59 crc kubenswrapper[4779]: I1128 12:58:58.986820 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:58:59 crc kubenswrapper[4779]: W1128 12:58:59.278598 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0a12679_627a_4310_a9f7_93731231b12e.slice/crio-cfbbeff9625661bf753ba98439df02df6bec743a8c23b91eb23d3c3d1e73c61c WatchSource:0}: Error finding container cfbbeff9625661bf753ba98439df02df6bec743a8c23b91eb23d3c3d1e73c61c: Status 404 returned error can't find the container with id cfbbeff9625661bf753ba98439df02df6bec743a8c23b91eb23d3c3d1e73c61c Nov 28 12:58:59 crc kubenswrapper[4779]: I1128 12:58:59.279570 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 12:58:59 crc kubenswrapper[4779]: I1128 12:58:59.356698 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0a12679-627a-4310-a9f7-93731231b12e","Type":"ContainerStarted","Data":"cfbbeff9625661bf753ba98439df02df6bec743a8c23b91eb23d3c3d1e73c61c"} Nov 28 12:58:59 crc kubenswrapper[4779]: I1128 12:58:59.458944 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 12:58:59 crc kubenswrapper[4779]: I1128 12:58:59.744508 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c8c979a-2995-4080-a0b6-173e62faceee" path="/var/lib/kubelet/pods/1c8c979a-2995-4080-a0b6-173e62faceee/volumes" Nov 28 12:58:59 crc kubenswrapper[4779]: I1128 12:58:59.745909 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486d0b33-cc59-495a-ba1f-e51c47e0d37e" path="/var/lib/kubelet/pods/486d0b33-cc59-495a-ba1f-e51c47e0d37e/volumes" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.334042 4779 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-68df85789f-bnhts"] Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.335457 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.337003 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.354637 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-bnhts"] Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.371202 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5","Type":"ContainerStarted","Data":"de7f4d0ff0f69bdc22329d79f584b6ed9b9b1d3d91a5f5105e6bb232017d2fb8"} Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.373511 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.373672 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.373780 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-dns-svc\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.373855 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.373918 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.374023 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-config\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.374706 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rmbr\" (UniqueName: 
\"kubernetes.io/projected/220dc915-078f-47cf-9bb3-06848d055a0e-kube-api-access-5rmbr\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.476462 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-config\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.476791 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rmbr\" (UniqueName: \"kubernetes.io/projected/220dc915-078f-47cf-9bb3-06848d055a0e-kube-api-access-5rmbr\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.476853 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.476955 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.477610 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-config\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.477651 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-openstack-edpm-ipam\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.478376 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-dns-svc\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.478456 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.478500 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.479308 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-dns-svc\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.479309 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-ovsdbserver-nb\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.479446 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-dns-swift-storage-0\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.480263 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-ovsdbserver-sb\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.679289 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rmbr\" (UniqueName: \"kubernetes.io/projected/220dc915-078f-47cf-9bb3-06848d055a0e-kube-api-access-5rmbr\") pod \"dnsmasq-dns-68df85789f-bnhts\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:00 crc kubenswrapper[4779]: I1128 12:59:00.953837 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:01 crc kubenswrapper[4779]: I1128 12:59:01.398039 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0a12679-627a-4310-a9f7-93731231b12e","Type":"ContainerStarted","Data":"4cbb091cc7a999db3551c8aeaa770d426bae97d487bee3a5b05096b5c930ce32"} Nov 28 12:59:01 crc kubenswrapper[4779]: I1128 12:59:01.688957 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-bnhts"] Nov 28 12:59:02 crc kubenswrapper[4779]: I1128 12:59:02.410558 4779 generic.go:334] "Generic (PLEG): container finished" podID="220dc915-078f-47cf-9bb3-06848d055a0e" containerID="be22d4b0803443032e1ca703af12b0592fc7df249ea0f3237793a86a15ac5371" exitCode=0 Nov 28 12:59:02 crc kubenswrapper[4779]: I1128 12:59:02.410610 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-bnhts" event={"ID":"220dc915-078f-47cf-9bb3-06848d055a0e","Type":"ContainerDied","Data":"be22d4b0803443032e1ca703af12b0592fc7df249ea0f3237793a86a15ac5371"} Nov 28 12:59:02 crc kubenswrapper[4779]: I1128 12:59:02.410976 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-bnhts" event={"ID":"220dc915-078f-47cf-9bb3-06848d055a0e","Type":"ContainerStarted","Data":"8a5f057a21a94477eb0cffd2e3c922839c48d49acd28e45bc7abae2a63c50920"} Nov 28 12:59:02 crc kubenswrapper[4779]: I1128 12:59:02.413045 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5","Type":"ContainerStarted","Data":"fb7d85fa0346f55ebf2349961028e8477e8f966752e77a7d828f95e993827be6"} Nov 28 12:59:03 crc kubenswrapper[4779]: I1128 12:59:03.426636 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-bnhts" event={"ID":"220dc915-078f-47cf-9bb3-06848d055a0e","Type":"ContainerStarted","Data":"9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f"} Nov 28 12:59:03 crc kubenswrapper[4779]: I1128 12:59:03.463179 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68df85789f-bnhts" podStartSLOduration=3.463161167 podStartE2EDuration="3.463161167s" podCreationTimestamp="2025-11-28 12:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:59:03.4550614 +0000 UTC m=+1404.020736784" watchObservedRunningTime="2025-11-28 12:59:03.463161167 +0000 UTC m=+1404.028836521" Nov 28 12:59:04 crc kubenswrapper[4779]: I1128 12:59:04.437525 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:10 crc kubenswrapper[4779]: I1128 12:59:10.955433 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.043452 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-4kf9h"] Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.043675 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" podUID="4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" containerName="dnsmasq-dns" containerID="cri-o://543a19a78f763af3433632c70f3f4d2126135d5c0f2582a67c019269a3b73a9f" gracePeriod=10 Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 
12:59:11.283389 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-t8mt8"] Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.285012 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.307495 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-t8mt8"] Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.401180 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.401228 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-dns-svc\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.401265 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-config\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.401464 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.401618 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.401767 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5h4w\" (UniqueName: \"kubernetes.io/projected/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-kube-api-access-w5h4w\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.401824 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.503553 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.503610 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-dns-svc\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.503642 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-config\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.503668 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-ovsdbserver-sb\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.503699 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.503740 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5h4w\" (UniqueName: \"kubernetes.io/projected/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-kube-api-access-w5h4w\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.503761 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.504711 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-dns-swift-storage-0\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.504777 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-ovsdbserver-nb\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.505831 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-ovsdbserver-sb\") 
pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.505924 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-openstack-edpm-ipam\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.505929 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-config\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.506022 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-dns-svc\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.522861 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5h4w\" (UniqueName: \"kubernetes.io/projected/4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f-kube-api-access-w5h4w\") pod \"dnsmasq-dns-bb85b8995-t8mt8\" (UID: \"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f\") " pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.611850 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:11 crc kubenswrapper[4779]: I1128 12:59:11.888874 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" podUID="4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.208:5353: connect: connection refused" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.038920 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb85b8995-t8mt8"] Nov 28 12:59:12 crc kubenswrapper[4779]: W1128 12:59:12.094550 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d53ab8b_7d5c_4b0f_9cfa_992ed1fd2c0f.slice/crio-c71ac9b0ef1163e4e689761a401544cc72c083ed249178358807d5c8f8899987 WatchSource:0}: Error finding container c71ac9b0ef1163e4e689761a401544cc72c083ed249178358807d5c8f8899987: Status 404 returned error can't find the container with id c71ac9b0ef1163e4e689761a401544cc72c083ed249178358807d5c8f8899987 Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.528113 4779 generic.go:334] "Generic (PLEG): container finished" podID="4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f" containerID="b5a6ba1156568af1c66eab95261dd00326e3324032ce2bdc264b3231eca66429" exitCode=0 Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.528363 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" event={"ID":"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f","Type":"ContainerDied","Data":"b5a6ba1156568af1c66eab95261dd00326e3324032ce2bdc264b3231eca66429"} Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.528485 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" event={"ID":"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f","Type":"ContainerStarted","Data":"c71ac9b0ef1163e4e689761a401544cc72c083ed249178358807d5c8f8899987"} Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.531478 4779 generic.go:334] "Generic (PLEG): container finished" podID="4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" containerID="543a19a78f763af3433632c70f3f4d2126135d5c0f2582a67c019269a3b73a9f" exitCode=0 Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.531526 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" event={"ID":"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5","Type":"ContainerDied","Data":"543a19a78f763af3433632c70f3f4d2126135d5c0f2582a67c019269a3b73a9f"} Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.699369 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.725811 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-ovsdbserver-sb\") pod \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.726137 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-dns-svc\") pod \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.726211 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqhvf\" (UniqueName: \"kubernetes.io/projected/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-kube-api-access-qqhvf\") pod \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.726296 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-config\") pod \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.726410 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-ovsdbserver-nb\") pod \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.726707 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-dns-swift-storage-0\") pod \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\" (UID: \"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5\") " Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.732947 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-kube-api-access-qqhvf" (OuterVolumeSpecName: "kube-api-access-qqhvf") pod "4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" (UID: "4259e6bd-29c2-44a9-a7d1-eccc321fd8a5"). InnerVolumeSpecName "kube-api-access-qqhvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.807015 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" (UID: "4259e6bd-29c2-44a9-a7d1-eccc321fd8a5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.811121 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-config" (OuterVolumeSpecName: "config") pod "4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" (UID: "4259e6bd-29c2-44a9-a7d1-eccc321fd8a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.816814 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" (UID: "4259e6bd-29c2-44a9-a7d1-eccc321fd8a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.826488 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" (UID: "4259e6bd-29c2-44a9-a7d1-eccc321fd8a5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.830322 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.830352 4779 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.830363 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.830374 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqhvf\" (UniqueName: \"kubernetes.io/projected/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-kube-api-access-qqhvf\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.830384 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.833969 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" (UID: "4259e6bd-29c2-44a9-a7d1-eccc321fd8a5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:59:12 crc kubenswrapper[4779]: I1128 12:59:12.932219 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:13 crc kubenswrapper[4779]: I1128 12:59:13.546502 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" event={"ID":"4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f","Type":"ContainerStarted","Data":"ca57c93974e4159baf9198ab8e717b566ea57d5946de461b784d48511fe795c4"} Nov 28 12:59:13 crc kubenswrapper[4779]: I1128 12:59:13.546591 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:13 crc kubenswrapper[4779]: I1128 12:59:13.553150 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" event={"ID":"4259e6bd-29c2-44a9-a7d1-eccc321fd8a5","Type":"ContainerDied","Data":"4d64f54318daeb319dfcde38aa6acd4e8c8ee288d26e59826c851202c83a6671"} Nov 28 12:59:13 crc kubenswrapper[4779]: I1128 12:59:13.553221 4779 scope.go:117] "RemoveContainer" containerID="543a19a78f763af3433632c70f3f4d2126135d5c0f2582a67c019269a3b73a9f" Nov 28 12:59:13 crc kubenswrapper[4779]: I1128 12:59:13.553434 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b5d74c8c-4kf9h" Nov 28 12:59:13 crc kubenswrapper[4779]: I1128 12:59:13.572013 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" podStartSLOduration=2.5719918980000003 podStartE2EDuration="2.571991898s" podCreationTimestamp="2025-11-28 12:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:59:13.570289082 +0000 UTC m=+1414.135964466" watchObservedRunningTime="2025-11-28 12:59:13.571991898 +0000 UTC m=+1414.137667262" Nov 28 12:59:13 crc kubenswrapper[4779]: I1128 12:59:13.610379 4779 scope.go:117] "RemoveContainer" containerID="a3bcb45235ed02510acf16dd7e98dfda8ec085d10d3fb48c20d9a6ef0e6a6273" Nov 28 12:59:13 crc kubenswrapper[4779]: I1128 12:59:13.618472 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-4kf9h"] Nov 28 12:59:13 crc kubenswrapper[4779]: I1128 12:59:13.628670 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79b5d74c8c-4kf9h"] Nov 28 12:59:13 crc kubenswrapper[4779]: I1128 12:59:13.736678 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" path="/var/lib/kubelet/pods/4259e6bd-29c2-44a9-a7d1-eccc321fd8a5/volumes" Nov 28 12:59:21 crc kubenswrapper[4779]: I1128 12:59:21.614468 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bb85b8995-t8mt8" Nov 28 12:59:21 crc kubenswrapper[4779]: I1128 12:59:21.711975 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-bnhts"] Nov 28 12:59:21 crc kubenswrapper[4779]: I1128 12:59:21.712730 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68df85789f-bnhts" podUID="220dc915-078f-47cf-9bb3-06848d055a0e" containerName="dnsmasq-dns" containerID="cri-o://9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f" gracePeriod=10 Nov 28 12:59:22 crc 
kubenswrapper[4779]: I1128 12:59:22.466515 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.590620 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-openstack-edpm-ipam\") pod \"220dc915-078f-47cf-9bb3-06848d055a0e\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.590783 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-ovsdbserver-nb\") pod \"220dc915-078f-47cf-9bb3-06848d055a0e\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.590940 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-config\") pod \"220dc915-078f-47cf-9bb3-06848d055a0e\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.590985 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rmbr\" (UniqueName: \"kubernetes.io/projected/220dc915-078f-47cf-9bb3-06848d055a0e-kube-api-access-5rmbr\") pod \"220dc915-078f-47cf-9bb3-06848d055a0e\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.591029 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-dns-swift-storage-0\") pod \"220dc915-078f-47cf-9bb3-06848d055a0e\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.591073 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-dns-svc\") pod \"220dc915-078f-47cf-9bb3-06848d055a0e\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.591179 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-ovsdbserver-sb\") pod \"220dc915-078f-47cf-9bb3-06848d055a0e\" (UID: \"220dc915-078f-47cf-9bb3-06848d055a0e\") " Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.609409 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/220dc915-078f-47cf-9bb3-06848d055a0e-kube-api-access-5rmbr" (OuterVolumeSpecName: "kube-api-access-5rmbr") pod "220dc915-078f-47cf-9bb3-06848d055a0e" (UID: "220dc915-078f-47cf-9bb3-06848d055a0e"). InnerVolumeSpecName "kube-api-access-5rmbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.644845 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-config" (OuterVolumeSpecName: "config") pod "220dc915-078f-47cf-9bb3-06848d055a0e" (UID: "220dc915-078f-47cf-9bb3-06848d055a0e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.654298 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "220dc915-078f-47cf-9bb3-06848d055a0e" (UID: "220dc915-078f-47cf-9bb3-06848d055a0e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.661852 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "220dc915-078f-47cf-9bb3-06848d055a0e" (UID: "220dc915-078f-47cf-9bb3-06848d055a0e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.679057 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "220dc915-078f-47cf-9bb3-06848d055a0e" (UID: "220dc915-078f-47cf-9bb3-06848d055a0e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.687007 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "220dc915-078f-47cf-9bb3-06848d055a0e" (UID: "220dc915-078f-47cf-9bb3-06848d055a0e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.689086 4779 generic.go:334] "Generic (PLEG): container finished" podID="220dc915-078f-47cf-9bb3-06848d055a0e" containerID="9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f" exitCode=0 Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.689162 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-bnhts" event={"ID":"220dc915-078f-47cf-9bb3-06848d055a0e","Type":"ContainerDied","Data":"9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f"} Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.689217 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68df85789f-bnhts" event={"ID":"220dc915-078f-47cf-9bb3-06848d055a0e","Type":"ContainerDied","Data":"8a5f057a21a94477eb0cffd2e3c922839c48d49acd28e45bc7abae2a63c50920"} Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.689239 4779 scope.go:117] "RemoveContainer" containerID="9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.689496 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68df85789f-bnhts" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.698725 4779 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.698758 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.698789 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.698804 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rmbr\" (UniqueName: \"kubernetes.io/projected/220dc915-078f-47cf-9bb3-06848d055a0e-kube-api-access-5rmbr\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.698815 4779 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.698824 4779 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.705265 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "220dc915-078f-47cf-9bb3-06848d055a0e" (UID: "220dc915-078f-47cf-9bb3-06848d055a0e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.730242 4779 scope.go:117] "RemoveContainer" containerID="be22d4b0803443032e1ca703af12b0592fc7df249ea0f3237793a86a15ac5371" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.773339 4779 scope.go:117] "RemoveContainer" containerID="9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f" Nov 28 12:59:22 crc kubenswrapper[4779]: E1128 12:59:22.774551 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f\": container with ID starting with 9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f not found: ID does not exist" containerID="9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.774596 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f"} err="failed to get container status \"9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f\": rpc error: code = NotFound desc = could not find container \"9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f\": container with ID starting with 9866efdf21e33821789130b767c2e9c27130e8430938fc0a88852ae56dd6553f not found: ID does not exist" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.774624 4779 scope.go:117] "RemoveContainer" containerID="be22d4b0803443032e1ca703af12b0592fc7df249ea0f3237793a86a15ac5371" Nov 28 12:59:22 crc kubenswrapper[4779]: E1128 12:59:22.776619 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be22d4b0803443032e1ca703af12b0592fc7df249ea0f3237793a86a15ac5371\": container with ID starting with be22d4b0803443032e1ca703af12b0592fc7df249ea0f3237793a86a15ac5371 not found: ID does not exist" containerID="be22d4b0803443032e1ca703af12b0592fc7df249ea0f3237793a86a15ac5371" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.776639 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be22d4b0803443032e1ca703af12b0592fc7df249ea0f3237793a86a15ac5371"} err="failed to get container status \"be22d4b0803443032e1ca703af12b0592fc7df249ea0f3237793a86a15ac5371\": rpc error: code = NotFound desc = could not find container \"be22d4b0803443032e1ca703af12b0592fc7df249ea0f3237793a86a15ac5371\": container with ID starting with be22d4b0803443032e1ca703af12b0592fc7df249ea0f3237793a86a15ac5371 not found: ID does not exist" Nov 28 12:59:22 crc kubenswrapper[4779]: I1128 12:59:22.800308 4779 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/220dc915-078f-47cf-9bb3-06848d055a0e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:23 crc kubenswrapper[4779]: I1128 12:59:23.034562 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-bnhts"] Nov 28 12:59:23 crc kubenswrapper[4779]: I1128 12:59:23.044658 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68df85789f-bnhts"] Nov 28 12:59:23 crc kubenswrapper[4779]: I1128 12:59:23.748767 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="220dc915-078f-47cf-9bb3-06848d055a0e" 
path="/var/lib/kubelet/pods/220dc915-078f-47cf-9bb3-06848d055a0e/volumes" Nov 28 12:59:33 crc kubenswrapper[4779]: I1128 12:59:33.822146 4779 generic.go:334] "Generic (PLEG): container finished" podID="80c2f0f7-d979-400e-b9fe-9369c3fc8ec5" containerID="fb7d85fa0346f55ebf2349961028e8477e8f966752e77a7d828f95e993827be6" exitCode=0 Nov 28 12:59:33 crc kubenswrapper[4779]: I1128 12:59:33.822206 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5","Type":"ContainerDied","Data":"fb7d85fa0346f55ebf2349961028e8477e8f966752e77a7d828f95e993827be6"} Nov 28 12:59:33 crc kubenswrapper[4779]: I1128 12:59:33.826559 4779 generic.go:334] "Generic (PLEG): container finished" podID="b0a12679-627a-4310-a9f7-93731231b12e" containerID="4cbb091cc7a999db3551c8aeaa770d426bae97d487bee3a5b05096b5c930ce32" exitCode=0 Nov 28 12:59:33 crc kubenswrapper[4779]: I1128 12:59:33.826598 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0a12679-627a-4310-a9f7-93731231b12e","Type":"ContainerDied","Data":"4cbb091cc7a999db3551c8aeaa770d426bae97d487bee3a5b05096b5c930ce32"} Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.799544 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j"] Nov 28 12:59:34 crc kubenswrapper[4779]: E1128 12:59:34.800282 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" containerName="dnsmasq-dns" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.800303 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" containerName="dnsmasq-dns" Nov 28 12:59:34 crc kubenswrapper[4779]: E1128 12:59:34.800324 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220dc915-078f-47cf-9bb3-06848d055a0e" containerName="dnsmasq-dns" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.800331 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="220dc915-078f-47cf-9bb3-06848d055a0e" containerName="dnsmasq-dns" Nov 28 12:59:34 crc kubenswrapper[4779]: E1128 12:59:34.800356 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" containerName="init" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.800364 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" containerName="init" Nov 28 12:59:34 crc kubenswrapper[4779]: E1128 12:59:34.800383 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220dc915-078f-47cf-9bb3-06848d055a0e" containerName="init" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.800390 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="220dc915-078f-47cf-9bb3-06848d055a0e" containerName="init" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.800611 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="220dc915-078f-47cf-9bb3-06848d055a0e" containerName="dnsmasq-dns" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.800639 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="4259e6bd-29c2-44a9-a7d1-eccc321fd8a5" containerName="dnsmasq-dns" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.801399 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.805409 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.805517 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.805779 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.805830 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.809429 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j"] Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.837086 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0a12679-627a-4310-a9f7-93731231b12e","Type":"ContainerStarted","Data":"67570af97f14e93c8308d0037fd4dd1b743d2e26f2bc1060fccc95545a5bb808"} Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.838053 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.840449 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"80c2f0f7-d979-400e-b9fe-9369c3fc8ec5","Type":"ContainerStarted","Data":"f4d9bb8627bcd53b643d50ca1bea12e446426ad02fd9e215551923e74f5a5b48"} Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.841058 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.865682 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.86566403 podStartE2EDuration="36.86566403s" podCreationTimestamp="2025-11-28 12:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:59:34.864885369 +0000 UTC m=+1435.430560763" watchObservedRunningTime="2025-11-28 12:59:34.86566403 +0000 UTC m=+1435.431339384" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.893020 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.893003513 podStartE2EDuration="36.893003513s" podCreationTimestamp="2025-11-28 12:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:59:34.888558514 +0000 UTC m=+1435.454233918" watchObservedRunningTime="2025-11-28 12:59:34.893003513 +0000 UTC m=+1435.458678857" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.946164 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q474j\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.946251 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q474j\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.946324 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bz6\" (UniqueName: \"kubernetes.io/projected/833a68ba-d01e-49d6-9055-0e4342fd3305-kube-api-access-79bz6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q474j\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:34 crc kubenswrapper[4779]: I1128 12:59:34.946393 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q474j\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:35 crc kubenswrapper[4779]: I1128 12:59:35.048374 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q474j\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:35 crc kubenswrapper[4779]: I1128 12:59:35.048692 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q474j\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:35 crc kubenswrapper[4779]: I1128 12:59:35.048720 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79bz6\" (UniqueName: \"kubernetes.io/projected/833a68ba-d01e-49d6-9055-0e4342fd3305-kube-api-access-79bz6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q474j\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:35 crc kubenswrapper[4779]: I1128 12:59:35.048763 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q474j\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:35 crc kubenswrapper[4779]: I1128 12:59:35.054642 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q474j\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:35 crc kubenswrapper[4779]: I1128 12:59:35.055946 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q474j\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:35 crc kubenswrapper[4779]: I1128 12:59:35.062254 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q474j\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:35 crc kubenswrapper[4779]: I1128 12:59:35.069986 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79bz6\" (UniqueName: \"kubernetes.io/projected/833a68ba-d01e-49d6-9055-0e4342fd3305-kube-api-access-79bz6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-q474j\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:35 crc kubenswrapper[4779]: I1128 12:59:35.121083 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:35 crc kubenswrapper[4779]: I1128 12:59:35.739438 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j"] Nov 28 12:59:35 crc kubenswrapper[4779]: I1128 12:59:35.849994 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" event={"ID":"833a68ba-d01e-49d6-9055-0e4342fd3305","Type":"ContainerStarted","Data":"18001fd37170f4ce75e3ec5e560947309c3b21522ce99967cd7f1746973bb013"} Nov 28 12:59:44 crc kubenswrapper[4779]: I1128 12:59:44.729421 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 12:59:45 crc kubenswrapper[4779]: I1128 12:59:45.956011 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" event={"ID":"833a68ba-d01e-49d6-9055-0e4342fd3305","Type":"ContainerStarted","Data":"568365046af36f5c55048338bb59ef2e6613fdeea865db3f2d923fbf0f4e16c3"} Nov 28 12:59:45 crc kubenswrapper[4779]: I1128 12:59:45.996917 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" podStartSLOduration=2.995524011 podStartE2EDuration="11.996887607s" podCreationTimestamp="2025-11-28 12:59:34 +0000 UTC" firstStartedPulling="2025-11-28 12:59:35.723671951 +0000 UTC m=+1436.289347305" lastFinishedPulling="2025-11-28 12:59:44.725035507 +0000 UTC m=+1445.290710901" observedRunningTime="2025-11-28 12:59:45.976618704 +0000 UTC m=+1446.542294068" watchObservedRunningTime="2025-11-28 12:59:45.996887607 +0000 UTC m=+1446.562563001" Nov 28 12:59:48 crc kubenswrapper[4779]: I1128 12:59:48.845562 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 28 12:59:48 crc kubenswrapper[4779]: I1128 12:59:48.990313 4779 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 28 12:59:55 crc kubenswrapper[4779]: E1128 12:59:55.565192 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod833a68ba_d01e_49d6_9055_0e4342fd3305.slice/crio-conmon-568365046af36f5c55048338bb59ef2e6613fdeea865db3f2d923fbf0f4e16c3.scope\": RecentStats: unable to find data in memory cache]" Nov 28 12:59:56 crc kubenswrapper[4779]: I1128 12:59:56.077165 4779 generic.go:334] "Generic (PLEG): container finished" podID="833a68ba-d01e-49d6-9055-0e4342fd3305" containerID="568365046af36f5c55048338bb59ef2e6613fdeea865db3f2d923fbf0f4e16c3" exitCode=0 Nov 28 12:59:56 crc kubenswrapper[4779]: I1128 12:59:56.077286 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" event={"ID":"833a68ba-d01e-49d6-9055-0e4342fd3305","Type":"ContainerDied","Data":"568365046af36f5c55048338bb59ef2e6613fdeea865db3f2d923fbf0f4e16c3"} Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.656339 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.839184 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-ssh-key\") pod \"833a68ba-d01e-49d6-9055-0e4342fd3305\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.839249 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79bz6\" (UniqueName: \"kubernetes.io/projected/833a68ba-d01e-49d6-9055-0e4342fd3305-kube-api-access-79bz6\") pod \"833a68ba-d01e-49d6-9055-0e4342fd3305\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.839292 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-repo-setup-combined-ca-bundle\") pod \"833a68ba-d01e-49d6-9055-0e4342fd3305\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.839402 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-inventory\") pod \"833a68ba-d01e-49d6-9055-0e4342fd3305\" (UID: \"833a68ba-d01e-49d6-9055-0e4342fd3305\") " Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.844873 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "833a68ba-d01e-49d6-9055-0e4342fd3305" (UID: "833a68ba-d01e-49d6-9055-0e4342fd3305"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.845113 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833a68ba-d01e-49d6-9055-0e4342fd3305-kube-api-access-79bz6" (OuterVolumeSpecName: "kube-api-access-79bz6") pod "833a68ba-d01e-49d6-9055-0e4342fd3305" (UID: "833a68ba-d01e-49d6-9055-0e4342fd3305"). InnerVolumeSpecName "kube-api-access-79bz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.866610 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "833a68ba-d01e-49d6-9055-0e4342fd3305" (UID: "833a68ba-d01e-49d6-9055-0e4342fd3305"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.869879 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-inventory" (OuterVolumeSpecName: "inventory") pod "833a68ba-d01e-49d6-9055-0e4342fd3305" (UID: "833a68ba-d01e-49d6-9055-0e4342fd3305"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.942113 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.942383 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79bz6\" (UniqueName: \"kubernetes.io/projected/833a68ba-d01e-49d6-9055-0e4342fd3305-kube-api-access-79bz6\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.942482 4779 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:57 crc kubenswrapper[4779]: I1128 12:59:57.942537 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833a68ba-d01e-49d6-9055-0e4342fd3305-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.107386 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" event={"ID":"833a68ba-d01e-49d6-9055-0e4342fd3305","Type":"ContainerDied","Data":"18001fd37170f4ce75e3ec5e560947309c3b21522ce99967cd7f1746973bb013"} Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.107461 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18001fd37170f4ce75e3ec5e560947309c3b21522ce99967cd7f1746973bb013" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.107488 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-q474j" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.220266 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz"] Nov 28 12:59:58 crc kubenswrapper[4779]: E1128 12:59:58.221448 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833a68ba-d01e-49d6-9055-0e4342fd3305" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.221477 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="833a68ba-d01e-49d6-9055-0e4342fd3305" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.221740 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="833a68ba-d01e-49d6-9055-0e4342fd3305" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.222575 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.226042 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.228763 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.229067 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.229575 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.260489 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz"] Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.357061 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vz8vz\" (UID: \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.357155 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vz8vz\" (UID: \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.357333 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4m6c\" (UniqueName: \"kubernetes.io/projected/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-kube-api-access-k4m6c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vz8vz\" (UID: \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.458831 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vz8vz\" (UID: \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.458875 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vz8vz\" (UID: \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.458979 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4m6c\" (UniqueName: \"kubernetes.io/projected/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-kube-api-access-k4m6c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vz8vz\" (UID: \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.464754 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vz8vz\" (UID: \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.465253 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vz8vz\" (UID: \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.477624 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4m6c\" (UniqueName: \"kubernetes.io/projected/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-kube-api-access-k4m6c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vz8vz\" (UID: \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" Nov 28 12:59:58 crc kubenswrapper[4779]: I1128 12:59:58.559450 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" Nov 28 12:59:59 crc kubenswrapper[4779]: I1128 12:59:59.141214 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz"] Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.134648 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" event={"ID":"1bec9363-8311-40e0-ab18-fcaf7acf3dc9","Type":"ContainerStarted","Data":"345e2b67cdae48acb584dcc6b1d49af3071a44822bf4bd3d534d286cc72f6951"} Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.135043 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" event={"ID":"1bec9363-8311-40e0-ab18-fcaf7acf3dc9","Type":"ContainerStarted","Data":"f553631b768c775858732c55fae49cd486c067ef7b2a3963fbb2e5572c8cde62"} Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.147072 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"] Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.148607 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg" Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.150988 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.152665 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.163227 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"] Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.167678 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" podStartSLOduration=1.593984448 podStartE2EDuration="2.16765598s" podCreationTimestamp="2025-11-28 12:59:58 +0000 UTC" firstStartedPulling="2025-11-28 12:59:59.152716504 +0000 UTC m=+1459.718391868" lastFinishedPulling="2025-11-28 12:59:59.726388006 +0000 UTC m=+1460.292063400" observedRunningTime="2025-11-28 13:00:00.157911489 +0000 UTC m=+1460.723586873" watchObservedRunningTime="2025-11-28 13:00:00.16765598 +0000 UTC m=+1460.733331344" Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.197829 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgqxw\" (UniqueName: \"kubernetes.io/projected/a2652142-08f6-4c0d-ad6c-8efd85280704-kube-api-access-dgqxw\") pod \"collect-profiles-29405580-8vvdg\" (UID: \"a2652142-08f6-4c0d-ad6c-8efd85280704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg" Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.198022 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2652142-08f6-4c0d-ad6c-8efd85280704-config-volume\") pod \"collect-profiles-29405580-8vvdg\" (UID: \"a2652142-08f6-4c0d-ad6c-8efd85280704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg" Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.198322 4779 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2652142-08f6-4c0d-ad6c-8efd85280704-secret-volume\") pod \"collect-profiles-29405580-8vvdg\" (UID: \"a2652142-08f6-4c0d-ad6c-8efd85280704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"
Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.300753 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgqxw\" (UniqueName: \"kubernetes.io/projected/a2652142-08f6-4c0d-ad6c-8efd85280704-kube-api-access-dgqxw\") pod \"collect-profiles-29405580-8vvdg\" (UID: \"a2652142-08f6-4c0d-ad6c-8efd85280704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"
Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.300891 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2652142-08f6-4c0d-ad6c-8efd85280704-config-volume\") pod \"collect-profiles-29405580-8vvdg\" (UID: \"a2652142-08f6-4c0d-ad6c-8efd85280704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"
Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.301216 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2652142-08f6-4c0d-ad6c-8efd85280704-secret-volume\") pod \"collect-profiles-29405580-8vvdg\" (UID: \"a2652142-08f6-4c0d-ad6c-8efd85280704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"
Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.303850 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2652142-08f6-4c0d-ad6c-8efd85280704-config-volume\") pod \"collect-profiles-29405580-8vvdg\" (UID: \"a2652142-08f6-4c0d-ad6c-8efd85280704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"
Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.306287 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2652142-08f6-4c0d-ad6c-8efd85280704-secret-volume\") pod \"collect-profiles-29405580-8vvdg\" (UID: \"a2652142-08f6-4c0d-ad6c-8efd85280704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"
Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.323034 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgqxw\" (UniqueName: \"kubernetes.io/projected/a2652142-08f6-4c0d-ad6c-8efd85280704-kube-api-access-dgqxw\") pod \"collect-profiles-29405580-8vvdg\" (UID: \"a2652142-08f6-4c0d-ad6c-8efd85280704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"
Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.486706 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"
Nov 28 13:00:00 crc kubenswrapper[4779]: I1128 13:00:00.806879 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"]
Nov 28 13:00:01 crc kubenswrapper[4779]: I1128 13:00:01.151433 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg" event={"ID":"a2652142-08f6-4c0d-ad6c-8efd85280704","Type":"ContainerStarted","Data":"49b0ed9fd87451994d3a4f5594d47ef303a466e5ee608ce014c056900053945b"}
Nov 28 13:00:01 crc kubenswrapper[4779]: I1128 13:00:01.151509 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg" event={"ID":"a2652142-08f6-4c0d-ad6c-8efd85280704","Type":"ContainerStarted","Data":"3c45656a37a3538fdd0582a5c9dd906e868737162cc4fad2d34df55337957aaa"}
Nov 28 13:00:02 crc kubenswrapper[4779]: I1128 13:00:02.163458 4779 generic.go:334] "Generic (PLEG): container finished" podID="a2652142-08f6-4c0d-ad6c-8efd85280704" containerID="49b0ed9fd87451994d3a4f5594d47ef303a466e5ee608ce014c056900053945b" exitCode=0
Nov 28 13:00:02 crc kubenswrapper[4779]: I1128 13:00:02.163539 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg" event={"ID":"a2652142-08f6-4c0d-ad6c-8efd85280704","Type":"ContainerDied","Data":"49b0ed9fd87451994d3a4f5594d47ef303a466e5ee608ce014c056900053945b"}
Nov 28 13:00:03 crc kubenswrapper[4779]: I1128 13:00:03.178803 4779 generic.go:334] "Generic (PLEG): container finished" podID="1bec9363-8311-40e0-ab18-fcaf7acf3dc9" containerID="345e2b67cdae48acb584dcc6b1d49af3071a44822bf4bd3d534d286cc72f6951" exitCode=0
Nov 28 13:00:03 crc kubenswrapper[4779]: I1128 13:00:03.178955 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" event={"ID":"1bec9363-8311-40e0-ab18-fcaf7acf3dc9","Type":"ContainerDied","Data":"345e2b67cdae48acb584dcc6b1d49af3071a44822bf4bd3d534d286cc72f6951"}
Nov 28 13:00:03 crc kubenswrapper[4779]: I1128 13:00:03.616911 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"
Nov 28 13:00:03 crc kubenswrapper[4779]: I1128 13:00:03.674006 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2652142-08f6-4c0d-ad6c-8efd85280704-config-volume\") pod \"a2652142-08f6-4c0d-ad6c-8efd85280704\" (UID: \"a2652142-08f6-4c0d-ad6c-8efd85280704\") "
Nov 28 13:00:03 crc kubenswrapper[4779]: I1128 13:00:03.674161 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgqxw\" (UniqueName: \"kubernetes.io/projected/a2652142-08f6-4c0d-ad6c-8efd85280704-kube-api-access-dgqxw\") pod \"a2652142-08f6-4c0d-ad6c-8efd85280704\" (UID: \"a2652142-08f6-4c0d-ad6c-8efd85280704\") "
Nov 28 13:00:03 crc kubenswrapper[4779]: I1128 13:00:03.674213 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2652142-08f6-4c0d-ad6c-8efd85280704-secret-volume\") pod \"a2652142-08f6-4c0d-ad6c-8efd85280704\" (UID: \"a2652142-08f6-4c0d-ad6c-8efd85280704\") "
Nov 28 13:00:03 crc kubenswrapper[4779]: I1128 13:00:03.674770 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2652142-08f6-4c0d-ad6c-8efd85280704-config-volume" (OuterVolumeSpecName: "config-volume") pod "a2652142-08f6-4c0d-ad6c-8efd85280704" (UID: "a2652142-08f6-4c0d-ad6c-8efd85280704"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 13:00:03 crc kubenswrapper[4779]: I1128 13:00:03.681043 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2652142-08f6-4c0d-ad6c-8efd85280704-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a2652142-08f6-4c0d-ad6c-8efd85280704" (UID: "a2652142-08f6-4c0d-ad6c-8efd85280704"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:00:03 crc kubenswrapper[4779]: I1128 13:00:03.681186 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2652142-08f6-4c0d-ad6c-8efd85280704-kube-api-access-dgqxw" (OuterVolumeSpecName: "kube-api-access-dgqxw") pod "a2652142-08f6-4c0d-ad6c-8efd85280704" (UID: "a2652142-08f6-4c0d-ad6c-8efd85280704"). InnerVolumeSpecName "kube-api-access-dgqxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:00:03 crc kubenswrapper[4779]: I1128 13:00:03.776608 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgqxw\" (UniqueName: \"kubernetes.io/projected/a2652142-08f6-4c0d-ad6c-8efd85280704-kube-api-access-dgqxw\") on node \"crc\" DevicePath \"\""
Nov 28 13:00:03 crc kubenswrapper[4779]: I1128 13:00:03.777072 4779 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2652142-08f6-4c0d-ad6c-8efd85280704-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 28 13:00:03 crc kubenswrapper[4779]: I1128 13:00:03.777082 4779 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2652142-08f6-4c0d-ad6c-8efd85280704-config-volume\") on node \"crc\" DevicePath \"\""
Nov 28 13:00:04 crc kubenswrapper[4779]: I1128 13:00:04.191169 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg" event={"ID":"a2652142-08f6-4c0d-ad6c-8efd85280704","Type":"ContainerDied","Data":"3c45656a37a3538fdd0582a5c9dd906e868737162cc4fad2d34df55337957aaa"}
Nov 28 13:00:04 crc kubenswrapper[4779]: I1128 13:00:04.191226 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c45656a37a3538fdd0582a5c9dd906e868737162cc4fad2d34df55337957aaa"
Nov 28 13:00:04 crc kubenswrapper[4779]: I1128 13:00:04.191226 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"
Nov 28 13:00:04 crc kubenswrapper[4779]: I1128 13:00:04.731731 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz"
Nov 28 13:00:04 crc kubenswrapper[4779]: I1128 13:00:04.898426 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-inventory\") pod \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\" (UID: \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\") "
Nov 28 13:00:04 crc kubenswrapper[4779]: I1128 13:00:04.898522 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4m6c\" (UniqueName: \"kubernetes.io/projected/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-kube-api-access-k4m6c\") pod \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\" (UID: \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\") "
Nov 28 13:00:04 crc kubenswrapper[4779]: I1128 13:00:04.898600 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-ssh-key\") pod \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\" (UID: \"1bec9363-8311-40e0-ab18-fcaf7acf3dc9\") "
Nov 28 13:00:04 crc kubenswrapper[4779]: I1128 13:00:04.905727 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-kube-api-access-k4m6c" (OuterVolumeSpecName: "kube-api-access-k4m6c") pod "1bec9363-8311-40e0-ab18-fcaf7acf3dc9" (UID: "1bec9363-8311-40e0-ab18-fcaf7acf3dc9"). InnerVolumeSpecName "kube-api-access-k4m6c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:00:04 crc kubenswrapper[4779]: I1128 13:00:04.940132 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-inventory" (OuterVolumeSpecName: "inventory") pod "1bec9363-8311-40e0-ab18-fcaf7acf3dc9" (UID: "1bec9363-8311-40e0-ab18-fcaf7acf3dc9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:00:04 crc kubenswrapper[4779]: I1128 13:00:04.948219 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1bec9363-8311-40e0-ab18-fcaf7acf3dc9" (UID: "1bec9363-8311-40e0-ab18-fcaf7acf3dc9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.002236 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.002285 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4m6c\" (UniqueName: \"kubernetes.io/projected/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-kube-api-access-k4m6c\") on node \"crc\" DevicePath \"\""
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.002306 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bec9363-8311-40e0-ab18-fcaf7acf3dc9-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.216518 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz" event={"ID":"1bec9363-8311-40e0-ab18-fcaf7acf3dc9","Type":"ContainerDied","Data":"f553631b768c775858732c55fae49cd486c067ef7b2a3963fbb2e5572c8cde62"}
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.216559 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f553631b768c775858732c55fae49cd486c067ef7b2a3963fbb2e5572c8cde62"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.216653 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vz8vz"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.319327 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"]
Nov 28 13:00:05 crc kubenswrapper[4779]: E1128 13:00:05.319809 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2652142-08f6-4c0d-ad6c-8efd85280704" containerName="collect-profiles"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.319830 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2652142-08f6-4c0d-ad6c-8efd85280704" containerName="collect-profiles"
Nov 28 13:00:05 crc kubenswrapper[4779]: E1128 13:00:05.319856 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bec9363-8311-40e0-ab18-fcaf7acf3dc9" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.319865 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bec9363-8311-40e0-ab18-fcaf7acf3dc9" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.320108 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2652142-08f6-4c0d-ad6c-8efd85280704" containerName="collect-profiles"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.320140 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bec9363-8311-40e0-ab18-fcaf7acf3dc9" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.320921 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.324649 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.326791 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.327347 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.327541 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.336583 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"]
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.426047 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.426328 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.426529 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drvr7\" (UniqueName: \"kubernetes.io/projected/81be23ef-d854-4ac3-8f39-601540e013ea-kube-api-access-drvr7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.426662 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.528502 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.528548 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.528594 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drvr7\" (UniqueName: \"kubernetes.io/projected/81be23ef-d854-4ac3-8f39-601540e013ea-kube-api-access-drvr7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.528622 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.535950 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.542793 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.549243 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.561237 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drvr7\" (UniqueName: \"kubernetes.io/projected/81be23ef-d854-4ac3-8f39-601540e013ea-kube-api-access-drvr7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:05 crc kubenswrapper[4779]: I1128 13:00:05.643245 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"
Nov 28 13:00:06 crc kubenswrapper[4779]: W1128 13:00:06.268975 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81be23ef_d854_4ac3_8f39_601540e013ea.slice/crio-3d1a97c49f014e69625d35be996b61a524044ad779b39ea8a73032d7cc0cc426 WatchSource:0}: Error finding container 3d1a97c49f014e69625d35be996b61a524044ad779b39ea8a73032d7cc0cc426: Status 404 returned error can't find the container with id 3d1a97c49f014e69625d35be996b61a524044ad779b39ea8a73032d7cc0cc426
Nov 28 13:00:06 crc kubenswrapper[4779]: I1128 13:00:06.281590 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f"]
Nov 28 13:00:07 crc kubenswrapper[4779]: I1128 13:00:07.247873 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f" event={"ID":"81be23ef-d854-4ac3-8f39-601540e013ea","Type":"ContainerStarted","Data":"3d1a97c49f014e69625d35be996b61a524044ad779b39ea8a73032d7cc0cc426"}
Nov 28 13:00:08 crc kubenswrapper[4779]: I1128 13:00:08.273366 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f" event={"ID":"81be23ef-d854-4ac3-8f39-601540e013ea","Type":"ContainerStarted","Data":"9f8a7af5a41c05cf7bb251ec88f433f8cf7f60bc1070f225b8d3157f2a170a51"}
Nov 28 13:00:08 crc kubenswrapper[4779]: I1128 13:00:08.313587 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f" podStartSLOduration=2.390149839 podStartE2EDuration="3.313556682s" podCreationTimestamp="2025-11-28 13:00:05 +0000 UTC" firstStartedPulling="2025-11-28 13:00:06.27361792 +0000 UTC m=+1466.839293314" lastFinishedPulling="2025-11-28 13:00:07.197024773 +0000 UTC m=+1467.762700157" observedRunningTime="2025-11-28 13:00:08.297889662 +0000 UTC m=+1468.863565056" watchObservedRunningTime="2025-11-28 13:00:08.313556682 +0000 UTC m=+1468.879232046"
Nov 28 13:00:46 crc kubenswrapper[4779]: I1128 13:00:46.285361 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:00:46 crc kubenswrapper[4779]: I1128 13:00:46.285972 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.160390 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29405581-vh7vm"]
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.163146 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.184120 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405581-vh7vm"]
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.275246 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-config-data\") pod \"keystone-cron-29405581-vh7vm\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") " pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.275324 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28txm\" (UniqueName: \"kubernetes.io/projected/8ad47080-f81f-4366-ac8b-b110a18c1834-kube-api-access-28txm\") pod \"keystone-cron-29405581-vh7vm\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") " pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.275621 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-fernet-keys\") pod \"keystone-cron-29405581-vh7vm\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") " pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.275892 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-combined-ca-bundle\") pod \"keystone-cron-29405581-vh7vm\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") " pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.378137 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-fernet-keys\") pod \"keystone-cron-29405581-vh7vm\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") " pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.378260 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-combined-ca-bundle\") pod \"keystone-cron-29405581-vh7vm\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") " pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.378315 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-config-data\") pod \"keystone-cron-29405581-vh7vm\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") " pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.378356 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28txm\" (UniqueName: \"kubernetes.io/projected/8ad47080-f81f-4366-ac8b-b110a18c1834-kube-api-access-28txm\") pod \"keystone-cron-29405581-vh7vm\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") " pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.386595 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-config-data\") pod \"keystone-cron-29405581-vh7vm\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") " pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.387645 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-fernet-keys\") pod \"keystone-cron-29405581-vh7vm\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") " pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.389958 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-combined-ca-bundle\") pod \"keystone-cron-29405581-vh7vm\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") " pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.411167 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28txm\" (UniqueName: \"kubernetes.io/projected/8ad47080-f81f-4366-ac8b-b110a18c1834-kube-api-access-28txm\") pod \"keystone-cron-29405581-vh7vm\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") " pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:00 crc kubenswrapper[4779]: I1128 13:01:00.490745 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:01 crc kubenswrapper[4779]: I1128 13:01:01.016924 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405581-vh7vm"]
Nov 28 13:01:01 crc kubenswrapper[4779]: I1128 13:01:01.937128 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405581-vh7vm" event={"ID":"8ad47080-f81f-4366-ac8b-b110a18c1834","Type":"ContainerStarted","Data":"b82211da53955286a5534cde6d77ef8c373d5541c9f2bfd8ec934c11eb5c042d"}
Nov 28 13:01:01 crc kubenswrapper[4779]: I1128 13:01:01.937707 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405581-vh7vm" event={"ID":"8ad47080-f81f-4366-ac8b-b110a18c1834","Type":"ContainerStarted","Data":"43c9b78ac1ee606268ea0570ce1b80f957e29e730bf4c61eef0847a3782e3236"}
Nov 28 13:01:01 crc kubenswrapper[4779]: I1128 13:01:01.968988 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29405581-vh7vm" podStartSLOduration=1.96896674 podStartE2EDuration="1.96896674s" podCreationTimestamp="2025-11-28 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 13:01:01.95514253 +0000 UTC m=+1522.520817924" watchObservedRunningTime="2025-11-28 13:01:01.96896674 +0000 UTC m=+1522.534642094"
Nov 28 13:01:03 crc kubenswrapper[4779]: I1128 13:01:03.976417 4779 generic.go:334] "Generic (PLEG): container finished" podID="8ad47080-f81f-4366-ac8b-b110a18c1834" containerID="b82211da53955286a5534cde6d77ef8c373d5541c9f2bfd8ec934c11eb5c042d" exitCode=0
Nov 28 13:01:03 crc kubenswrapper[4779]: I1128 13:01:03.976646 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405581-vh7vm" event={"ID":"8ad47080-f81f-4366-ac8b-b110a18c1834","Type":"ContainerDied","Data":"b82211da53955286a5534cde6d77ef8c373d5541c9f2bfd8ec934c11eb5c042d"}
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.398425 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.490894 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28txm\" (UniqueName: \"kubernetes.io/projected/8ad47080-f81f-4366-ac8b-b110a18c1834-kube-api-access-28txm\") pod \"8ad47080-f81f-4366-ac8b-b110a18c1834\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") "
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.491013 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-fernet-keys\") pod \"8ad47080-f81f-4366-ac8b-b110a18c1834\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") "
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.492742 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-config-data\") pod \"8ad47080-f81f-4366-ac8b-b110a18c1834\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") "
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.493005 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-combined-ca-bundle\") pod \"8ad47080-f81f-4366-ac8b-b110a18c1834\" (UID: \"8ad47080-f81f-4366-ac8b-b110a18c1834\") "
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.497796 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad47080-f81f-4366-ac8b-b110a18c1834-kube-api-access-28txm" (OuterVolumeSpecName: "kube-api-access-28txm") pod "8ad47080-f81f-4366-ac8b-b110a18c1834" (UID: "8ad47080-f81f-4366-ac8b-b110a18c1834"). InnerVolumeSpecName "kube-api-access-28txm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.498287 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8ad47080-f81f-4366-ac8b-b110a18c1834" (UID: "8ad47080-f81f-4366-ac8b-b110a18c1834"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.541489 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ad47080-f81f-4366-ac8b-b110a18c1834" (UID: "8ad47080-f81f-4366-ac8b-b110a18c1834"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.591310 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-config-data" (OuterVolumeSpecName: "config-data") pod "8ad47080-f81f-4366-ac8b-b110a18c1834" (UID: "8ad47080-f81f-4366-ac8b-b110a18c1834"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.597172 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.597225 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28txm\" (UniqueName: \"kubernetes.io/projected/8ad47080-f81f-4366-ac8b-b110a18c1834-kube-api-access-28txm\") on node \"crc\" DevicePath \"\""
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.597245 4779 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 28 13:01:05 crc kubenswrapper[4779]: I1128 13:01:05.597263 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ad47080-f81f-4366-ac8b-b110a18c1834-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 13:01:06 crc kubenswrapper[4779]: I1128 13:01:06.004637 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405581-vh7vm" event={"ID":"8ad47080-f81f-4366-ac8b-b110a18c1834","Type":"ContainerDied","Data":"43c9b78ac1ee606268ea0570ce1b80f957e29e730bf4c61eef0847a3782e3236"}
Nov 28 13:01:06 crc kubenswrapper[4779]: I1128 13:01:06.004722 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43c9b78ac1ee606268ea0570ce1b80f957e29e730bf4c61eef0847a3782e3236"
Nov 28 13:01:06 crc kubenswrapper[4779]: I1128 13:01:06.004807 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405581-vh7vm"
Nov 28 13:01:07 crc kubenswrapper[4779]: I1128 13:01:07.815080 4779 scope.go:117] "RemoveContainer" containerID="33096196fd271bbe3108c36cc829c5d0a96697b7ea1c7ef4e6f62c1b83e3c738"
Nov 28 13:01:07 crc kubenswrapper[4779]: I1128 13:01:07.854575 4779 scope.go:117] "RemoveContainer" containerID="152891d1ce9830ccd975c1998b9ce771e2bb41f01cfb71a544da9319642e6cae"
Nov 28 13:01:07 crc kubenswrapper[4779]: I1128 13:01:07.919291 4779 scope.go:117] "RemoveContainer" containerID="8550a6d31866f1124fe606fc8fe9729e7eb07c23b5656ff7ad010771b01292f4"
Nov 28 13:01:16 crc kubenswrapper[4779]: I1128 13:01:16.284598 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:01:16 crc kubenswrapper[4779]: I1128 13:01:16.285305 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.579975 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xjk4c"]
Nov 28 13:01:44 crc kubenswrapper[4779]: E1128 13:01:44.581312 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad47080-f81f-4366-ac8b-b110a18c1834" containerName="keystone-cron"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.581336 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad47080-f81f-4366-ac8b-b110a18c1834" containerName="keystone-cron"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.581670 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ad47080-f81f-4366-ac8b-b110a18c1834" containerName="keystone-cron"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.584140 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.599126 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xjk4c"]
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.759196 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57ac67f-f367-4e73-89a7-8d283c7a9946-utilities\") pod \"community-operators-xjk4c\" (UID: \"c57ac67f-f367-4e73-89a7-8d283c7a9946\") " pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.759460 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5448b\" (UniqueName: \"kubernetes.io/projected/c57ac67f-f367-4e73-89a7-8d283c7a9946-kube-api-access-5448b\") pod \"community-operators-xjk4c\" (UID: \"c57ac67f-f367-4e73-89a7-8d283c7a9946\") " pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.759657 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57ac67f-f367-4e73-89a7-8d283c7a9946-catalog-content\") pod \"community-operators-xjk4c\" (UID: \"c57ac67f-f367-4e73-89a7-8d283c7a9946\") " pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.861601 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57ac67f-f367-4e73-89a7-8d283c7a9946-catalog-content\") pod \"community-operators-xjk4c\" (UID: \"c57ac67f-f367-4e73-89a7-8d283c7a9946\") " pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.861668 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57ac67f-f367-4e73-89a7-8d283c7a9946-utilities\") pod \"community-operators-xjk4c\" (UID: \"c57ac67f-f367-4e73-89a7-8d283c7a9946\") " pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.861773 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5448b\" (UniqueName: \"kubernetes.io/projected/c57ac67f-f367-4e73-89a7-8d283c7a9946-kube-api-access-5448b\") pod \"community-operators-xjk4c\" (UID: \"c57ac67f-f367-4e73-89a7-8d283c7a9946\") " pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.862317 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57ac67f-f367-4e73-89a7-8d283c7a9946-catalog-content\") pod \"community-operators-xjk4c\" (UID: \"c57ac67f-f367-4e73-89a7-8d283c7a9946\") " pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.862355 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57ac67f-f367-4e73-89a7-8d283c7a9946-utilities\") pod \"community-operators-xjk4c\" (UID: \"c57ac67f-f367-4e73-89a7-8d283c7a9946\") " pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.881871 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5448b\" (UniqueName: \"kubernetes.io/projected/c57ac67f-f367-4e73-89a7-8d283c7a9946-kube-api-access-5448b\") pod \"community-operators-xjk4c\" (UID: \"c57ac67f-f367-4e73-89a7-8d283c7a9946\") " pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:44 crc kubenswrapper[4779]: I1128 13:01:44.927668 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:45 crc kubenswrapper[4779]: I1128 13:01:45.537860 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xjk4c"]
Nov 28 13:01:46 crc kubenswrapper[4779]: I1128 13:01:46.284662 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:01:46 crc kubenswrapper[4779]: I1128 13:01:46.284978 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:01:46 crc kubenswrapper[4779]: I1128 13:01:46.285044 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2"
Nov 28 13:01:46 crc kubenswrapper[4779]: I1128 13:01:46.285731 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 13:01:46 crc kubenswrapper[4779]: I1128 13:01:46.285782 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" gracePeriod=600
Nov 28 13:01:46 crc kubenswrapper[4779]: E1128 13:01:46.419186 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:01:46 crc kubenswrapper[4779]: I1128 13:01:46.468166 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" exitCode=0
Nov 28 13:01:46 crc kubenswrapper[4779]: I1128 13:01:46.468263 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf"}
Nov 28 13:01:46 crc kubenswrapper[4779]: I1128 13:01:46.468803 4779 scope.go:117] "RemoveContainer" containerID="7c21214830b8f1e0b08f1ae5ac2fb71de0793255942c5f72a4ead485743abffa"
Nov 28 13:01:46 crc kubenswrapper[4779]: I1128 13:01:46.469768 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf"
Nov 28 13:01:46 crc kubenswrapper[4779]: E1128 13:01:46.470266 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:01:46 crc kubenswrapper[4779]: I1128 13:01:46.471398 4779 generic.go:334] "Generic (PLEG): container finished" podID="c57ac67f-f367-4e73-89a7-8d283c7a9946" containerID="752a322f630b42a2ca4fbcbcd62758c5c1b8db0d8b8afa0bab9512fae5257c9c" exitCode=0
Nov 28 13:01:46 crc kubenswrapper[4779]: I1128 13:01:46.471446 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjk4c" event={"ID":"c57ac67f-f367-4e73-89a7-8d283c7a9946","Type":"ContainerDied","Data":"752a322f630b42a2ca4fbcbcd62758c5c1b8db0d8b8afa0bab9512fae5257c9c"}
Nov 28 13:01:46 crc kubenswrapper[4779]: I1128 13:01:46.471499 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjk4c" event={"ID":"c57ac67f-f367-4e73-89a7-8d283c7a9946","Type":"ContainerStarted","Data":"2c0672c47e206ee01ab8755ea4e486e251c937c362db21e86f27545f31c014e9"}
Nov 28 13:01:48 crc kubenswrapper[4779]: I1128 13:01:48.496637 4779 generic.go:334] "Generic (PLEG): container finished" podID="c57ac67f-f367-4e73-89a7-8d283c7a9946" containerID="38ecf936565a04bf58ccd255cca52f36ff3e62dfdd954de34d1b2271d0980781" exitCode=0
Nov 28 13:01:48 crc kubenswrapper[4779]: I1128 13:01:48.496682 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjk4c" event={"ID":"c57ac67f-f367-4e73-89a7-8d283c7a9946","Type":"ContainerDied","Data":"38ecf936565a04bf58ccd255cca52f36ff3e62dfdd954de34d1b2271d0980781"}
Nov 28 13:01:50 crc kubenswrapper[4779]: I1128 13:01:50.521673 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjk4c" event={"ID":"c57ac67f-f367-4e73-89a7-8d283c7a9946","Type":"ContainerStarted","Data":"75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506"}
Nov 28 13:01:50 crc kubenswrapper[4779]: I1128 13:01:50.547840 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xjk4c" podStartSLOduration=3.105340477 podStartE2EDuration="6.547817067s" podCreationTimestamp="2025-11-28 13:01:44 +0000 UTC" firstStartedPulling="2025-11-28 13:01:46.481081275 +0000 UTC m=+1567.046756639" lastFinishedPulling="2025-11-28 13:01:49.923557865 +0000 UTC m=+1570.489233229" observedRunningTime="2025-11-28 13:01:50.544034255 +0000 UTC m=+1571.109709609" watchObservedRunningTime="2025-11-28 13:01:50.547817067 +0000 UTC m=+1571.113492421"
Nov 28 13:01:54 crc kubenswrapper[4779]: I1128 13:01:54.928313 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:54 crc kubenswrapper[4779]: I1128 13:01:54.929082 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:54 crc kubenswrapper[4779]: I1128 13:01:54.997677 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:55 crc kubenswrapper[4779]: I1128 13:01:55.670849 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:55 crc kubenswrapper[4779]: I1128 13:01:55.753564 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xjk4c"]
Nov 28 13:01:57 crc kubenswrapper[4779]: I1128 13:01:57.634971 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xjk4c" podUID="c57ac67f-f367-4e73-89a7-8d283c7a9946" containerName="registry-server" containerID="cri-o://75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506" gracePeriod=2
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.175387 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.342637 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5448b\" (UniqueName: \"kubernetes.io/projected/c57ac67f-f367-4e73-89a7-8d283c7a9946-kube-api-access-5448b\") pod \"c57ac67f-f367-4e73-89a7-8d283c7a9946\" (UID: \"c57ac67f-f367-4e73-89a7-8d283c7a9946\") "
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.342706 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57ac67f-f367-4e73-89a7-8d283c7a9946-catalog-content\") pod \"c57ac67f-f367-4e73-89a7-8d283c7a9946\" (UID: \"c57ac67f-f367-4e73-89a7-8d283c7a9946\") "
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.342766 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57ac67f-f367-4e73-89a7-8d283c7a9946-utilities\") pod \"c57ac67f-f367-4e73-89a7-8d283c7a9946\" (UID: \"c57ac67f-f367-4e73-89a7-8d283c7a9946\") "
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.344166 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c57ac67f-f367-4e73-89a7-8d283c7a9946-utilities" (OuterVolumeSpecName: "utilities") pod "c57ac67f-f367-4e73-89a7-8d283c7a9946" (UID: "c57ac67f-f367-4e73-89a7-8d283c7a9946"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.356315 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c57ac67f-f367-4e73-89a7-8d283c7a9946-kube-api-access-5448b" (OuterVolumeSpecName: "kube-api-access-5448b") pod "c57ac67f-f367-4e73-89a7-8d283c7a9946" (UID: "c57ac67f-f367-4e73-89a7-8d283c7a9946"). InnerVolumeSpecName "kube-api-access-5448b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.396933 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c57ac67f-f367-4e73-89a7-8d283c7a9946-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c57ac67f-f367-4e73-89a7-8d283c7a9946" (UID: "c57ac67f-f367-4e73-89a7-8d283c7a9946"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.444870 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5448b\" (UniqueName: \"kubernetes.io/projected/c57ac67f-f367-4e73-89a7-8d283c7a9946-kube-api-access-5448b\") on node \"crc\" DevicePath \"\""
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.444915 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c57ac67f-f367-4e73-89a7-8d283c7a9946-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.444933 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c57ac67f-f367-4e73-89a7-8d283c7a9946-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.656939 4779 generic.go:334] "Generic (PLEG): container finished" podID="c57ac67f-f367-4e73-89a7-8d283c7a9946" containerID="75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506" exitCode=0
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.656999 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjk4c" event={"ID":"c57ac67f-f367-4e73-89a7-8d283c7a9946","Type":"ContainerDied","Data":"75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506"}
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.657037 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjk4c" event={"ID":"c57ac67f-f367-4e73-89a7-8d283c7a9946","Type":"ContainerDied","Data":"2c0672c47e206ee01ab8755ea4e486e251c937c362db21e86f27545f31c014e9"}
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.657062 4779 scope.go:117] "RemoveContainer" containerID="75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506"
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.657277 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjk4c"
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.701772 4779 scope.go:117] "RemoveContainer" containerID="38ecf936565a04bf58ccd255cca52f36ff3e62dfdd954de34d1b2271d0980781"
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.717468 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xjk4c"]
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.727295 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xjk4c"]
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.744059 4779 scope.go:117] "RemoveContainer" containerID="752a322f630b42a2ca4fbcbcd62758c5c1b8db0d8b8afa0bab9512fae5257c9c"
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.780774 4779 scope.go:117] "RemoveContainer" containerID="75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506"
Nov 28 13:01:58 crc kubenswrapper[4779]: E1128 13:01:58.781513 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506\": container with ID starting with 75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506 not found: ID does not exist" containerID="75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506"
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.781617 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506"} err="failed to get container status \"75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506\": rpc error: code = NotFound desc = could not find container \"75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506\": container with ID starting with 75ea7a418aba1cccf7d91a0cc1a3e60f95f4601ceaccf984862a48312de17506 not found: ID does not exist"
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.781663 4779 scope.go:117] "RemoveContainer" containerID="38ecf936565a04bf58ccd255cca52f36ff3e62dfdd954de34d1b2271d0980781"
Nov 28 13:01:58 crc kubenswrapper[4779]: E1128 13:01:58.782241 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38ecf936565a04bf58ccd255cca52f36ff3e62dfdd954de34d1b2271d0980781\": container with ID starting with 38ecf936565a04bf58ccd255cca52f36ff3e62dfdd954de34d1b2271d0980781 not found: ID does not exist" containerID="38ecf936565a04bf58ccd255cca52f36ff3e62dfdd954de34d1b2271d0980781"
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.782295 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38ecf936565a04bf58ccd255cca52f36ff3e62dfdd954de34d1b2271d0980781"} err="failed to get container status \"38ecf936565a04bf58ccd255cca52f36ff3e62dfdd954de34d1b2271d0980781\": rpc error: code = NotFound desc = could not find container \"38ecf936565a04bf58ccd255cca52f36ff3e62dfdd954de34d1b2271d0980781\": container with ID starting with 38ecf936565a04bf58ccd255cca52f36ff3e62dfdd954de34d1b2271d0980781 not found: ID does not exist"
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.782338 4779 scope.go:117] "RemoveContainer" containerID="752a322f630b42a2ca4fbcbcd62758c5c1b8db0d8b8afa0bab9512fae5257c9c"
Nov 28 13:01:58 crc kubenswrapper[4779]: E1128 13:01:58.782711 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"752a322f630b42a2ca4fbcbcd62758c5c1b8db0d8b8afa0bab9512fae5257c9c\": container with ID starting with 752a322f630b42a2ca4fbcbcd62758c5c1b8db0d8b8afa0bab9512fae5257c9c not found: ID does not exist" containerID="752a322f630b42a2ca4fbcbcd62758c5c1b8db0d8b8afa0bab9512fae5257c9c"
Nov 28 13:01:58 crc kubenswrapper[4779]: I1128 13:01:58.782756 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"752a322f630b42a2ca4fbcbcd62758c5c1b8db0d8b8afa0bab9512fae5257c9c"} err="failed to get container status \"752a322f630b42a2ca4fbcbcd62758c5c1b8db0d8b8afa0bab9512fae5257c9c\": rpc error: code = NotFound desc = could not find container \"752a322f630b42a2ca4fbcbcd62758c5c1b8db0d8b8afa0bab9512fae5257c9c\": container with ID starting with 752a322f630b42a2ca4fbcbcd62758c5c1b8db0d8b8afa0bab9512fae5257c9c not found: ID does not exist"
Nov 28 13:01:59 crc kubenswrapper[4779]: I1128 13:01:59.750611 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c57ac67f-f367-4e73-89a7-8d283c7a9946" path="/var/lib/kubelet/pods/c57ac67f-f367-4e73-89a7-8d283c7a9946/volumes"
Nov 28 13:02:00 crc kubenswrapper[4779]: I1128 13:02:00.727186 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf"
Nov 28 13:02:00 crc kubenswrapper[4779]: E1128 13:02:00.727487 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:02:08 crc kubenswrapper[4779]: I1128 13:02:08.047913 4779 scope.go:117] "RemoveContainer" containerID="bfb97f0358d284d94f52421e9edc4d3f896df828fcc1e9f49163b623682331b6"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.732960 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-drwqf"]
Nov 28 13:02:12 crc kubenswrapper[4779]: E1128 13:02:12.734735 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57ac67f-f367-4e73-89a7-8d283c7a9946" containerName="extract-content"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.734772 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57ac67f-f367-4e73-89a7-8d283c7a9946" containerName="extract-content"
Nov 28 13:02:12 crc kubenswrapper[4779]: E1128 13:02:12.734859 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57ac67f-f367-4e73-89a7-8d283c7a9946" containerName="extract-utilities"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.734878 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57ac67f-f367-4e73-89a7-8d283c7a9946" containerName="extract-utilities"
Nov 28 13:02:12 crc kubenswrapper[4779]: E1128 13:02:12.734913 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57ac67f-f367-4e73-89a7-8d283c7a9946" containerName="registry-server"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.734929 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57ac67f-f367-4e73-89a7-8d283c7a9946" containerName="registry-server"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.735437 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c57ac67f-f367-4e73-89a7-8d283c7a9946" containerName="registry-server"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.738311 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drwqf"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.753284 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drwqf"]
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.866459 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca16ccf-e0cf-4713-870a-93db4d17e133-catalog-content\") pod \"certified-operators-drwqf\" (UID: \"fca16ccf-e0cf-4713-870a-93db4d17e133\") " pod="openshift-marketplace/certified-operators-drwqf"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.866582 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4gkl\" (UniqueName: \"kubernetes.io/projected/fca16ccf-e0cf-4713-870a-93db4d17e133-kube-api-access-r4gkl\") pod \"certified-operators-drwqf\" (UID: \"fca16ccf-e0cf-4713-870a-93db4d17e133\") " pod="openshift-marketplace/certified-operators-drwqf"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.866756 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca16ccf-e0cf-4713-870a-93db4d17e133-utilities\") pod \"certified-operators-drwqf\" (UID: \"fca16ccf-e0cf-4713-870a-93db4d17e133\") " pod="openshift-marketplace/certified-operators-drwqf"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.968431 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca16ccf-e0cf-4713-870a-93db4d17e133-utilities\") pod \"certified-operators-drwqf\" (UID: \"fca16ccf-e0cf-4713-870a-93db4d17e133\") " pod="openshift-marketplace/certified-operators-drwqf"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.968859 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca16ccf-e0cf-4713-870a-93db4d17e133-catalog-content\") pod \"certified-operators-drwqf\" (UID: \"fca16ccf-e0cf-4713-870a-93db4d17e133\") " pod="openshift-marketplace/certified-operators-drwqf"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.968938 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4gkl\" (UniqueName: \"kubernetes.io/projected/fca16ccf-e0cf-4713-870a-93db4d17e133-kube-api-access-r4gkl\") pod \"certified-operators-drwqf\" (UID: \"fca16ccf-e0cf-4713-870a-93db4d17e133\") " pod="openshift-marketplace/certified-operators-drwqf"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.969298 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca16ccf-e0cf-4713-870a-93db4d17e133-utilities\") pod \"certified-operators-drwqf\" (UID: \"fca16ccf-e0cf-4713-870a-93db4d17e133\") " pod="openshift-marketplace/certified-operators-drwqf"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.969518 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca16ccf-e0cf-4713-870a-93db4d17e133-catalog-content\") pod \"certified-operators-drwqf\" (UID: \"fca16ccf-e0cf-4713-870a-93db4d17e133\") " pod="openshift-marketplace/certified-operators-drwqf"
Nov 28 13:02:12 crc kubenswrapper[4779]: I1128 13:02:12.996528 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4gkl\" (UniqueName: \"kubernetes.io/projected/fca16ccf-e0cf-4713-870a-93db4d17e133-kube-api-access-r4gkl\") pod \"certified-operators-drwqf\" (UID: \"fca16ccf-e0cf-4713-870a-93db4d17e133\") " pod="openshift-marketplace/certified-operators-drwqf"
Nov 28 13:02:13 crc kubenswrapper[4779]: I1128 13:02:13.081450 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drwqf"
Nov 28 13:02:13 crc kubenswrapper[4779]: I1128 13:02:13.583195 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drwqf"]
Nov 28 13:02:13 crc kubenswrapper[4779]: I1128 13:02:13.833755 4779 generic.go:334] "Generic (PLEG): container finished" podID="fca16ccf-e0cf-4713-870a-93db4d17e133" containerID="7555f8a262e44958fa2e406bf4cbda90927a868d2958ea85450a429c20cb5064" exitCode=0
Nov 28 13:02:13 crc kubenswrapper[4779]: I1128 13:02:13.833822 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwqf" event={"ID":"fca16ccf-e0cf-4713-870a-93db4d17e133","Type":"ContainerDied","Data":"7555f8a262e44958fa2e406bf4cbda90927a868d2958ea85450a429c20cb5064"}
Nov 28 13:02:13 crc kubenswrapper[4779]: I1128 13:02:13.834205 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwqf" event={"ID":"fca16ccf-e0cf-4713-870a-93db4d17e133","Type":"ContainerStarted","Data":"f702c19cd308cfc18dd6e05c495abb77ee7dd968b8b6418b72cb86cfc0a1cdc7"}
Nov 28 13:02:14 crc kubenswrapper[4779]: I1128 13:02:14.727334 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf"
Nov 28 13:02:14 crc kubenswrapper[4779]: E1128 13:02:14.727592 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:02:15 crc kubenswrapper[4779]: I1128 13:02:15.857253 4779 generic.go:334] "Generic (PLEG): container finished" podID="fca16ccf-e0cf-4713-870a-93db4d17e133" containerID="2e937c6ee6f047e41d66f5c08017c822453573434c7c0edca29eef6350e91c75" exitCode=0
Nov 28 13:02:15 crc kubenswrapper[4779]: I1128 13:02:15.857323 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwqf" event={"ID":"fca16ccf-e0cf-4713-870a-93db4d17e133","Type":"ContainerDied","Data":"2e937c6ee6f047e41d66f5c08017c822453573434c7c0edca29eef6350e91c75"}
Nov 28 13:02:16 crc kubenswrapper[4779]: I1128 13:02:16.871773 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwqf" event={"ID":"fca16ccf-e0cf-4713-870a-93db4d17e133","Type":"ContainerStarted","Data":"4fdc956bfe12461e3a7e9092c490759acb81f8bb55bd777ca54a6d7059284bc7"}
Nov 28 13:02:16 crc kubenswrapper[4779]: I1128 13:02:16.897845 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-drwqf"
podStartSLOduration=2.186833714 podStartE2EDuration="4.897819557s" podCreationTimestamp="2025-11-28 13:02:12 +0000 UTC" firstStartedPulling="2025-11-28 13:02:13.836704855 +0000 UTC m=+1594.402380219" lastFinishedPulling="2025-11-28 13:02:16.547690698 +0000 UTC m=+1597.113366062" observedRunningTime="2025-11-28 13:02:16.894421256 +0000 UTC m=+1597.460096650" watchObservedRunningTime="2025-11-28 13:02:16.897819557 +0000 UTC m=+1597.463494951" Nov 28 13:02:23 crc kubenswrapper[4779]: I1128 13:02:23.081763 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-drwqf" Nov 28 13:02:23 crc kubenswrapper[4779]: I1128 13:02:23.082347 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-drwqf" Nov 28 13:02:23 crc kubenswrapper[4779]: I1128 13:02:23.151566 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-drwqf" Nov 28 13:02:24 crc kubenswrapper[4779]: I1128 13:02:24.013649 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-drwqf" Nov 28 13:02:24 crc kubenswrapper[4779]: I1128 13:02:24.090465 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drwqf"] Nov 28 13:02:25 crc kubenswrapper[4779]: I1128 13:02:25.970731 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-drwqf" podUID="fca16ccf-e0cf-4713-870a-93db4d17e133" containerName="registry-server" containerID="cri-o://4fdc956bfe12461e3a7e9092c490759acb81f8bb55bd777ca54a6d7059284bc7" gracePeriod=2 Nov 28 13:02:26 crc kubenswrapper[4779]: I1128 13:02:26.979366 4779 generic.go:334] "Generic (PLEG): container finished" podID="fca16ccf-e0cf-4713-870a-93db4d17e133" containerID="4fdc956bfe12461e3a7e9092c490759acb81f8bb55bd777ca54a6d7059284bc7" exitCode=0 Nov 28 13:02:26 crc kubenswrapper[4779]: I1128 13:02:26.979424 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwqf" event={"ID":"fca16ccf-e0cf-4713-870a-93db4d17e133","Type":"ContainerDied","Data":"4fdc956bfe12461e3a7e9092c490759acb81f8bb55bd777ca54a6d7059284bc7"} Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.626234 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-drwqf" Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.692794 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca16ccf-e0cf-4713-870a-93db4d17e133-utilities\") pod \"fca16ccf-e0cf-4713-870a-93db4d17e133\" (UID: \"fca16ccf-e0cf-4713-870a-93db4d17e133\") " Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.694080 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4gkl\" (UniqueName: \"kubernetes.io/projected/fca16ccf-e0cf-4713-870a-93db4d17e133-kube-api-access-r4gkl\") pod \"fca16ccf-e0cf-4713-870a-93db4d17e133\" (UID: \"fca16ccf-e0cf-4713-870a-93db4d17e133\") " Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.694454 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca16ccf-e0cf-4713-870a-93db4d17e133-catalog-content\") pod \"fca16ccf-e0cf-4713-870a-93db4d17e133\" (UID: \"fca16ccf-e0cf-4713-870a-93db4d17e133\") " Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.696779 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca16ccf-e0cf-4713-870a-93db4d17e133-utilities" (OuterVolumeSpecName: "utilities") pod "fca16ccf-e0cf-4713-870a-93db4d17e133" (UID: "fca16ccf-e0cf-4713-870a-93db4d17e133"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.706361 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca16ccf-e0cf-4713-870a-93db4d17e133-kube-api-access-r4gkl" (OuterVolumeSpecName: "kube-api-access-r4gkl") pod "fca16ccf-e0cf-4713-870a-93db4d17e133" (UID: "fca16ccf-e0cf-4713-870a-93db4d17e133"). InnerVolumeSpecName "kube-api-access-r4gkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.741370 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca16ccf-e0cf-4713-870a-93db4d17e133-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fca16ccf-e0cf-4713-870a-93db4d17e133" (UID: "fca16ccf-e0cf-4713-870a-93db4d17e133"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.798756 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca16ccf-e0cf-4713-870a-93db4d17e133-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.798960 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4gkl\" (UniqueName: \"kubernetes.io/projected/fca16ccf-e0cf-4713-870a-93db4d17e133-kube-api-access-r4gkl\") on node \"crc\" DevicePath \"\"" Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.798986 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca16ccf-e0cf-4713-870a-93db4d17e133-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.992911 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drwqf" event={"ID":"fca16ccf-e0cf-4713-870a-93db4d17e133","Type":"ContainerDied","Data":"f702c19cd308cfc18dd6e05c495abb77ee7dd968b8b6418b72cb86cfc0a1cdc7"} Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.992962 4779 scope.go:117] "RemoveContainer" containerID="4fdc956bfe12461e3a7e9092c490759acb81f8bb55bd777ca54a6d7059284bc7" Nov 28 13:02:27 crc kubenswrapper[4779]: I1128 13:02:27.993118 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drwqf" Nov 28 13:02:28 crc kubenswrapper[4779]: I1128 13:02:28.021471 4779 scope.go:117] "RemoveContainer" containerID="2e937c6ee6f047e41d66f5c08017c822453573434c7c0edca29eef6350e91c75" Nov 28 13:02:28 crc kubenswrapper[4779]: I1128 13:02:28.022877 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drwqf"] Nov 28 13:02:28 crc kubenswrapper[4779]: I1128 13:02:28.031761 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-drwqf"] Nov 28 13:02:28 crc kubenswrapper[4779]: I1128 13:02:28.051229 4779 scope.go:117] "RemoveContainer" containerID="7555f8a262e44958fa2e406bf4cbda90927a868d2958ea85450a429c20cb5064" Nov 28 13:02:29 crc kubenswrapper[4779]: I1128 13:02:29.743206 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:02:29 crc kubenswrapper[4779]: E1128 13:02:29.743884 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:02:29 crc kubenswrapper[4779]: I1128 13:02:29.750534 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca16ccf-e0cf-4713-870a-93db4d17e133" path="/var/lib/kubelet/pods/fca16ccf-e0cf-4713-870a-93db4d17e133/volumes" Nov 28 13:02:38 crc kubenswrapper[4779]: I1128 13:02:38.817142 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rx48z"] Nov 28 13:02:38 crc kubenswrapper[4779]: E1128 13:02:38.818330 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca16ccf-e0cf-4713-870a-93db4d17e133" 
containerName="extract-utilities" Nov 28 13:02:38 crc kubenswrapper[4779]: I1128 13:02:38.818354 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca16ccf-e0cf-4713-870a-93db4d17e133" containerName="extract-utilities" Nov 28 13:02:38 crc kubenswrapper[4779]: E1128 13:02:38.818393 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca16ccf-e0cf-4713-870a-93db4d17e133" containerName="extract-content" Nov 28 13:02:38 crc kubenswrapper[4779]: I1128 13:02:38.818405 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca16ccf-e0cf-4713-870a-93db4d17e133" containerName="extract-content" Nov 28 13:02:38 crc kubenswrapper[4779]: E1128 13:02:38.818433 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca16ccf-e0cf-4713-870a-93db4d17e133" containerName="registry-server" Nov 28 13:02:38 crc kubenswrapper[4779]: I1128 13:02:38.818445 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca16ccf-e0cf-4713-870a-93db4d17e133" containerName="registry-server" Nov 28 13:02:38 crc kubenswrapper[4779]: I1128 13:02:38.818811 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca16ccf-e0cf-4713-870a-93db4d17e133" containerName="registry-server" Nov 28 13:02:38 crc kubenswrapper[4779]: I1128 13:02:38.821538 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:38 crc kubenswrapper[4779]: I1128 13:02:38.844898 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rx48z"] Nov 28 13:02:38 crc kubenswrapper[4779]: I1128 13:02:38.953541 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be073eb4-014b-4bc5-85fe-4ae76805fa90-utilities\") pod \"redhat-marketplace-rx48z\" (UID: \"be073eb4-014b-4bc5-85fe-4ae76805fa90\") " pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:38 crc kubenswrapper[4779]: I1128 13:02:38.953648 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be073eb4-014b-4bc5-85fe-4ae76805fa90-catalog-content\") pod \"redhat-marketplace-rx48z\" (UID: \"be073eb4-014b-4bc5-85fe-4ae76805fa90\") " pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:38 crc kubenswrapper[4779]: I1128 13:02:38.953787 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm77p\" (UniqueName: \"kubernetes.io/projected/be073eb4-014b-4bc5-85fe-4ae76805fa90-kube-api-access-vm77p\") pod \"redhat-marketplace-rx48z\" (UID: \"be073eb4-014b-4bc5-85fe-4ae76805fa90\") " pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:39 crc kubenswrapper[4779]: I1128 13:02:39.055512 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm77p\" (UniqueName: \"kubernetes.io/projected/be073eb4-014b-4bc5-85fe-4ae76805fa90-kube-api-access-vm77p\") pod \"redhat-marketplace-rx48z\" (UID: \"be073eb4-014b-4bc5-85fe-4ae76805fa90\") " pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:39 crc kubenswrapper[4779]: I1128 13:02:39.055969 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be073eb4-014b-4bc5-85fe-4ae76805fa90-utilities\") pod \"redhat-marketplace-rx48z\" (UID: 
\"be073eb4-014b-4bc5-85fe-4ae76805fa90\") " pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:39 crc kubenswrapper[4779]: I1128 13:02:39.056017 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be073eb4-014b-4bc5-85fe-4ae76805fa90-catalog-content\") pod \"redhat-marketplace-rx48z\" (UID: \"be073eb4-014b-4bc5-85fe-4ae76805fa90\") " pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:39 crc kubenswrapper[4779]: I1128 13:02:39.056551 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be073eb4-014b-4bc5-85fe-4ae76805fa90-utilities\") pod \"redhat-marketplace-rx48z\" (UID: \"be073eb4-014b-4bc5-85fe-4ae76805fa90\") " pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:39 crc kubenswrapper[4779]: I1128 13:02:39.056622 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be073eb4-014b-4bc5-85fe-4ae76805fa90-catalog-content\") pod \"redhat-marketplace-rx48z\" (UID: \"be073eb4-014b-4bc5-85fe-4ae76805fa90\") " pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:39 crc kubenswrapper[4779]: I1128 13:02:39.077228 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm77p\" (UniqueName: \"kubernetes.io/projected/be073eb4-014b-4bc5-85fe-4ae76805fa90-kube-api-access-vm77p\") pod \"redhat-marketplace-rx48z\" (UID: \"be073eb4-014b-4bc5-85fe-4ae76805fa90\") " pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:39 crc kubenswrapper[4779]: I1128 13:02:39.171746 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:39 crc kubenswrapper[4779]: I1128 13:02:39.456920 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rx48z"] Nov 28 13:02:40 crc kubenswrapper[4779]: I1128 13:02:40.181545 4779 generic.go:334] "Generic (PLEG): container finished" podID="be073eb4-014b-4bc5-85fe-4ae76805fa90" containerID="f6974271097d612cb47bd5b592d677154f620430eafd1b990d91902e0e3615e7" exitCode=0 Nov 28 13:02:40 crc kubenswrapper[4779]: I1128 13:02:40.182970 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rx48z" event={"ID":"be073eb4-014b-4bc5-85fe-4ae76805fa90","Type":"ContainerDied","Data":"f6974271097d612cb47bd5b592d677154f620430eafd1b990d91902e0e3615e7"} Nov 28 13:02:40 crc kubenswrapper[4779]: I1128 13:02:40.183103 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rx48z" event={"ID":"be073eb4-014b-4bc5-85fe-4ae76805fa90","Type":"ContainerStarted","Data":"2b198aeaa8e42e0435b1cb9b2a4ec04fb5f70d8bb29bd24b34e4f948e389ab6c"} Nov 28 13:02:41 crc kubenswrapper[4779]: I1128 13:02:41.194265 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rx48z" event={"ID":"be073eb4-014b-4bc5-85fe-4ae76805fa90","Type":"ContainerStarted","Data":"55dcf10554c16eea4de0742311918cc3ef02569ddd2586dfa70acf7dc1c0843b"} Nov 28 13:02:42 crc kubenswrapper[4779]: I1128 13:02:42.210525 4779 generic.go:334] "Generic (PLEG): container finished" podID="be073eb4-014b-4bc5-85fe-4ae76805fa90" containerID="55dcf10554c16eea4de0742311918cc3ef02569ddd2586dfa70acf7dc1c0843b" exitCode=0 Nov 28 13:02:42 crc kubenswrapper[4779]: I1128 
13:02:42.210613 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rx48z" event={"ID":"be073eb4-014b-4bc5-85fe-4ae76805fa90","Type":"ContainerDied","Data":"55dcf10554c16eea4de0742311918cc3ef02569ddd2586dfa70acf7dc1c0843b"} Nov 28 13:02:43 crc kubenswrapper[4779]: I1128 13:02:43.228061 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rx48z" event={"ID":"be073eb4-014b-4bc5-85fe-4ae76805fa90","Type":"ContainerStarted","Data":"4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82"} Nov 28 13:02:43 crc kubenswrapper[4779]: I1128 13:02:43.263604 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rx48z" podStartSLOduration=2.792261369 podStartE2EDuration="5.263578582s" podCreationTimestamp="2025-11-28 13:02:38 +0000 UTC" firstStartedPulling="2025-11-28 13:02:40.185620188 +0000 UTC m=+1620.751295552" lastFinishedPulling="2025-11-28 13:02:42.656937361 +0000 UTC m=+1623.222612765" observedRunningTime="2025-11-28 13:02:43.257804487 +0000 UTC m=+1623.823479871" watchObservedRunningTime="2025-11-28 13:02:43.263578582 +0000 UTC m=+1623.829253966" Nov 28 13:02:44 crc kubenswrapper[4779]: I1128 13:02:44.727018 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:02:44 crc kubenswrapper[4779]: E1128 13:02:44.727700 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:02:49 crc kubenswrapper[4779]: I1128 13:02:49.174614 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:49 crc kubenswrapper[4779]: I1128 13:02:49.175083 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:49 crc kubenswrapper[4779]: I1128 13:02:49.264425 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:49 crc kubenswrapper[4779]: I1128 13:02:49.380608 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:49 crc kubenswrapper[4779]: I1128 13:02:49.513763 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rx48z"] Nov 28 13:02:51 crc kubenswrapper[4779]: I1128 13:02:51.327693 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rx48z" podUID="be073eb4-014b-4bc5-85fe-4ae76805fa90" containerName="registry-server" containerID="cri-o://4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82" gracePeriod=2 Nov 28 13:02:51 crc kubenswrapper[4779]: I1128 13:02:51.886838 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:51 crc kubenswrapper[4779]: I1128 13:02:51.979890 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm77p\" (UniqueName: \"kubernetes.io/projected/be073eb4-014b-4bc5-85fe-4ae76805fa90-kube-api-access-vm77p\") pod \"be073eb4-014b-4bc5-85fe-4ae76805fa90\" (UID: \"be073eb4-014b-4bc5-85fe-4ae76805fa90\") " Nov 28 13:02:51 crc kubenswrapper[4779]: I1128 13:02:51.980185 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be073eb4-014b-4bc5-85fe-4ae76805fa90-utilities\") pod \"be073eb4-014b-4bc5-85fe-4ae76805fa90\" (UID: \"be073eb4-014b-4bc5-85fe-4ae76805fa90\") " Nov 28 13:02:51 crc kubenswrapper[4779]: I1128 13:02:51.980287 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be073eb4-014b-4bc5-85fe-4ae76805fa90-catalog-content\") pod \"be073eb4-014b-4bc5-85fe-4ae76805fa90\" (UID: \"be073eb4-014b-4bc5-85fe-4ae76805fa90\") " Nov 28 13:02:51 crc kubenswrapper[4779]: I1128 13:02:51.981199 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be073eb4-014b-4bc5-85fe-4ae76805fa90-utilities" (OuterVolumeSpecName: "utilities") pod "be073eb4-014b-4bc5-85fe-4ae76805fa90" (UID: "be073eb4-014b-4bc5-85fe-4ae76805fa90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:02:51 crc kubenswrapper[4779]: I1128 13:02:51.987757 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be073eb4-014b-4bc5-85fe-4ae76805fa90-kube-api-access-vm77p" (OuterVolumeSpecName: "kube-api-access-vm77p") pod "be073eb4-014b-4bc5-85fe-4ae76805fa90" (UID: "be073eb4-014b-4bc5-85fe-4ae76805fa90"). InnerVolumeSpecName "kube-api-access-vm77p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.014550 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be073eb4-014b-4bc5-85fe-4ae76805fa90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be073eb4-014b-4bc5-85fe-4ae76805fa90" (UID: "be073eb4-014b-4bc5-85fe-4ae76805fa90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.082445 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm77p\" (UniqueName: \"kubernetes.io/projected/be073eb4-014b-4bc5-85fe-4ae76805fa90-kube-api-access-vm77p\") on node \"crc\" DevicePath \"\"" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.082479 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be073eb4-014b-4bc5-85fe-4ae76805fa90-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.082489 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be073eb4-014b-4bc5-85fe-4ae76805fa90-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.345675 4779 generic.go:334] "Generic (PLEG): container finished" podID="be073eb4-014b-4bc5-85fe-4ae76805fa90" containerID="4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82" exitCode=0 Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.345795 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rx48z" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.345809 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rx48z" event={"ID":"be073eb4-014b-4bc5-85fe-4ae76805fa90","Type":"ContainerDied","Data":"4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82"} Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.346153 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rx48z" event={"ID":"be073eb4-014b-4bc5-85fe-4ae76805fa90","Type":"ContainerDied","Data":"2b198aeaa8e42e0435b1cb9b2a4ec04fb5f70d8bb29bd24b34e4f948e389ab6c"} Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.346188 4779 scope.go:117] "RemoveContainer" containerID="4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.387262 4779 scope.go:117] "RemoveContainer" containerID="55dcf10554c16eea4de0742311918cc3ef02569ddd2586dfa70acf7dc1c0843b" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.402317 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rx48z"] Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.414309 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rx48z"] Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.421461 4779 scope.go:117] "RemoveContainer" containerID="f6974271097d612cb47bd5b592d677154f620430eafd1b990d91902e0e3615e7" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.480654 4779 scope.go:117] "RemoveContainer" containerID="4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82" Nov 28 13:02:52 crc kubenswrapper[4779]: E1128 13:02:52.481498 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82\": container with ID starting with 4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82 not found: ID does not exist" containerID="4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.481555 4779 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82"} err="failed to get container status \"4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82\": rpc error: code = NotFound desc = could not find container \"4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82\": container with ID starting with 4d336523c0ea5f5229e060fa8a051b54952c42583d65f176c38c58e3b4627f82 not found: ID does not exist" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.481588 4779 scope.go:117] "RemoveContainer" containerID="55dcf10554c16eea4de0742311918cc3ef02569ddd2586dfa70acf7dc1c0843b" Nov 28 13:02:52 crc kubenswrapper[4779]: E1128 13:02:52.482033 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55dcf10554c16eea4de0742311918cc3ef02569ddd2586dfa70acf7dc1c0843b\": container with ID starting with 55dcf10554c16eea4de0742311918cc3ef02569ddd2586dfa70acf7dc1c0843b not found: ID does not exist" containerID="55dcf10554c16eea4de0742311918cc3ef02569ddd2586dfa70acf7dc1c0843b" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.482090 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55dcf10554c16eea4de0742311918cc3ef02569ddd2586dfa70acf7dc1c0843b"} err="failed to get container status \"55dcf10554c16eea4de0742311918cc3ef02569ddd2586dfa70acf7dc1c0843b\": rpc error: code = NotFound desc = could not find container \"55dcf10554c16eea4de0742311918cc3ef02569ddd2586dfa70acf7dc1c0843b\": container with ID starting with 55dcf10554c16eea4de0742311918cc3ef02569ddd2586dfa70acf7dc1c0843b not found: ID does not exist" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.482191 4779 scope.go:117] "RemoveContainer" containerID="f6974271097d612cb47bd5b592d677154f620430eafd1b990d91902e0e3615e7" Nov 28 13:02:52 crc kubenswrapper[4779]: E1128 13:02:52.482610 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6974271097d612cb47bd5b592d677154f620430eafd1b990d91902e0e3615e7\": container with ID starting with f6974271097d612cb47bd5b592d677154f620430eafd1b990d91902e0e3615e7 not found: ID does not exist" containerID="f6974271097d612cb47bd5b592d677154f620430eafd1b990d91902e0e3615e7" Nov 28 13:02:52 crc kubenswrapper[4779]: I1128 13:02:52.482752 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6974271097d612cb47bd5b592d677154f620430eafd1b990d91902e0e3615e7"} err="failed to get container status \"f6974271097d612cb47bd5b592d677154f620430eafd1b990d91902e0e3615e7\": rpc error: code = NotFound desc = could not find container \"f6974271097d612cb47bd5b592d677154f620430eafd1b990d91902e0e3615e7\": container with ID starting with f6974271097d612cb47bd5b592d677154f620430eafd1b990d91902e0e3615e7 not found: ID does not exist" Nov 28 13:02:53 crc kubenswrapper[4779]: I1128 13:02:53.739472 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be073eb4-014b-4bc5-85fe-4ae76805fa90" path="/var/lib/kubelet/pods/be073eb4-014b-4bc5-85fe-4ae76805fa90/volumes" Nov 28 13:02:57 crc kubenswrapper[4779]: I1128 13:02:57.726158 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:02:57 crc kubenswrapper[4779]: E1128 13:02:57.727394 4779 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:03:11 crc kubenswrapper[4779]: I1128 13:03:11.725876 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:03:11 crc kubenswrapper[4779]: E1128 13:03:11.726667 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:03:21 crc kubenswrapper[4779]: I1128 13:03:21.715845 4779 generic.go:334] "Generic (PLEG): container finished" podID="81be23ef-d854-4ac3-8f39-601540e013ea" containerID="9f8a7af5a41c05cf7bb251ec88f433f8cf7f60bc1070f225b8d3157f2a170a51" exitCode=0 Nov 28 13:03:21 crc kubenswrapper[4779]: I1128 13:03:21.715975 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f" event={"ID":"81be23ef-d854-4ac3-8f39-601540e013ea","Type":"ContainerDied","Data":"9f8a7af5a41c05cf7bb251ec88f433f8cf7f60bc1070f225b8d3157f2a170a51"} Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.323786 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.465865 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-ssh-key\") pod \"81be23ef-d854-4ac3-8f39-601540e013ea\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.466325 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drvr7\" (UniqueName: \"kubernetes.io/projected/81be23ef-d854-4ac3-8f39-601540e013ea-kube-api-access-drvr7\") pod \"81be23ef-d854-4ac3-8f39-601540e013ea\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.466421 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-inventory\") pod \"81be23ef-d854-4ac3-8f39-601540e013ea\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.466471 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-bootstrap-combined-ca-bundle\") pod \"81be23ef-d854-4ac3-8f39-601540e013ea\" (UID: \"81be23ef-d854-4ac3-8f39-601540e013ea\") " Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.474377 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: 
"bootstrap-combined-ca-bundle") pod "81be23ef-d854-4ac3-8f39-601540e013ea" (UID: "81be23ef-d854-4ac3-8f39-601540e013ea"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.475143 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81be23ef-d854-4ac3-8f39-601540e013ea-kube-api-access-drvr7" (OuterVolumeSpecName: "kube-api-access-drvr7") pod "81be23ef-d854-4ac3-8f39-601540e013ea" (UID: "81be23ef-d854-4ac3-8f39-601540e013ea"). InnerVolumeSpecName "kube-api-access-drvr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.503505 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-inventory" (OuterVolumeSpecName: "inventory") pod "81be23ef-d854-4ac3-8f39-601540e013ea" (UID: "81be23ef-d854-4ac3-8f39-601540e013ea"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.509325 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "81be23ef-d854-4ac3-8f39-601540e013ea" (UID: "81be23ef-d854-4ac3-8f39-601540e013ea"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.570029 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.570077 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drvr7\" (UniqueName: \"kubernetes.io/projected/81be23ef-d854-4ac3-8f39-601540e013ea-kube-api-access-drvr7\") on node \"crc\" DevicePath \"\"" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.570121 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.570140 4779 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81be23ef-d854-4ac3-8f39-601540e013ea-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.742189 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f" event={"ID":"81be23ef-d854-4ac3-8f39-601540e013ea","Type":"ContainerDied","Data":"3d1a97c49f014e69625d35be996b61a524044ad779b39ea8a73032d7cc0cc426"} Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.742249 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d1a97c49f014e69625d35be996b61a524044ad779b39ea8a73032d7cc0cc426" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.742254 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.853525 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz"] Nov 28 13:03:23 crc kubenswrapper[4779]: E1128 13:03:23.854261 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be073eb4-014b-4bc5-85fe-4ae76805fa90" containerName="extract-content" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.854296 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="be073eb4-014b-4bc5-85fe-4ae76805fa90" containerName="extract-content" Nov 28 13:03:23 crc kubenswrapper[4779]: E1128 13:03:23.854339 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be073eb4-014b-4bc5-85fe-4ae76805fa90" containerName="registry-server" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.854352 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="be073eb4-014b-4bc5-85fe-4ae76805fa90" containerName="registry-server" Nov 28 13:03:23 crc kubenswrapper[4779]: E1128 13:03:23.854413 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be073eb4-014b-4bc5-85fe-4ae76805fa90" containerName="extract-utilities" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.854428 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="be073eb4-014b-4bc5-85fe-4ae76805fa90" containerName="extract-utilities" Nov 28 13:03:23 crc kubenswrapper[4779]: E1128 13:03:23.854459 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81be23ef-d854-4ac3-8f39-601540e013ea" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.854473 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="81be23ef-d854-4ac3-8f39-601540e013ea" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.854811 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="81be23ef-d854-4ac3-8f39-601540e013ea" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.854868 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="be073eb4-014b-4bc5-85fe-4ae76805fa90" containerName="registry-server" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.855948 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.858415 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.858789 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.858790 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.861083 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.862931 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz"] Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.978769 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/867eb458-fc69-4d08-958e-67f69bbf7ec9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz\" (UID: \"867eb458-fc69-4d08-958e-67f69bbf7ec9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.978876 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/867eb458-fc69-4d08-958e-67f69bbf7ec9-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz\" (UID: \"867eb458-fc69-4d08-958e-67f69bbf7ec9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" Nov 28 13:03:23 crc kubenswrapper[4779]: I1128 13:03:23.979030 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wknfq\" (UniqueName: \"kubernetes.io/projected/867eb458-fc69-4d08-958e-67f69bbf7ec9-kube-api-access-wknfq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz\" (UID: \"867eb458-fc69-4d08-958e-67f69bbf7ec9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" Nov 28 13:03:24 crc kubenswrapper[4779]: I1128 13:03:24.081260 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/867eb458-fc69-4d08-958e-67f69bbf7ec9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz\" (UID: \"867eb458-fc69-4d08-958e-67f69bbf7ec9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" Nov 28 13:03:24 crc kubenswrapper[4779]: I1128 13:03:24.081390 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/867eb458-fc69-4d08-958e-67f69bbf7ec9-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz\" (UID: \"867eb458-fc69-4d08-958e-67f69bbf7ec9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" Nov 28 13:03:24 crc kubenswrapper[4779]: I1128 13:03:24.081553 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wknfq\" (UniqueName: \"kubernetes.io/projected/867eb458-fc69-4d08-958e-67f69bbf7ec9-kube-api-access-wknfq\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz\" (UID: \"867eb458-fc69-4d08-958e-67f69bbf7ec9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" Nov 28 13:03:24 crc kubenswrapper[4779]: I1128 13:03:24.086380 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/867eb458-fc69-4d08-958e-67f69bbf7ec9-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz\" (UID: \"867eb458-fc69-4d08-958e-67f69bbf7ec9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" Nov 28 13:03:24 crc kubenswrapper[4779]: I1128 13:03:24.089070 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/867eb458-fc69-4d08-958e-67f69bbf7ec9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz\" (UID: \"867eb458-fc69-4d08-958e-67f69bbf7ec9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" Nov 28 13:03:24 crc kubenswrapper[4779]: I1128 13:03:24.100677 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wknfq\" (UniqueName: \"kubernetes.io/projected/867eb458-fc69-4d08-958e-67f69bbf7ec9-kube-api-access-wknfq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz\" (UID: \"867eb458-fc69-4d08-958e-67f69bbf7ec9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" Nov 28 13:03:24 crc kubenswrapper[4779]: I1128 13:03:24.181258 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" Nov 28 13:03:24 crc kubenswrapper[4779]: I1128 13:03:24.827835 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz"] Nov 28 13:03:24 crc kubenswrapper[4779]: I1128 13:03:24.830184 4779 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 13:03:25 crc kubenswrapper[4779]: I1128 13:03:25.767022 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" event={"ID":"867eb458-fc69-4d08-958e-67f69bbf7ec9","Type":"ContainerStarted","Data":"d64d0b3f5e4f48f1b0dc0ce8c43b1b050f093271f8b0355cdd10de7a82eaedb5"} Nov 28 13:03:25 crc kubenswrapper[4779]: I1128 13:03:25.767591 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" event={"ID":"867eb458-fc69-4d08-958e-67f69bbf7ec9","Type":"ContainerStarted","Data":"79bf56d7b581d1a838830ff41f862d573a0c807e5ac3975946ca4717c63ad8dd"} Nov 28 13:03:25 crc kubenswrapper[4779]: I1128 13:03:25.800515 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" podStartSLOduration=2.287674945 podStartE2EDuration="2.800497563s" podCreationTimestamp="2025-11-28 13:03:23 +0000 UTC" firstStartedPulling="2025-11-28 13:03:24.829871651 +0000 UTC m=+1665.395547015" lastFinishedPulling="2025-11-28 13:03:25.342694239 +0000 UTC m=+1665.908369633" observedRunningTime="2025-11-28 13:03:25.792415626 +0000 UTC m=+1666.358090990" watchObservedRunningTime="2025-11-28 13:03:25.800497563 +0000 UTC m=+1666.366172927" Nov 28 13:03:26 crc kubenswrapper[4779]: I1128 13:03:26.727626 4779 scope.go:117] "RemoveContainer" 
containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:03:26 crc kubenswrapper[4779]: E1128 13:03:26.727934 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:03:41 crc kubenswrapper[4779]: I1128 13:03:41.726119 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:03:41 crc kubenswrapper[4779]: E1128 13:03:41.727015 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:03:53 crc kubenswrapper[4779]: I1128 13:03:53.726539 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:03:53 crc kubenswrapper[4779]: E1128 13:03:53.727729 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:03:58 crc kubenswrapper[4779]: I1128 13:03:58.048566 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-425f-account-create-update-l6kl6"] Nov 28 13:03:58 crc kubenswrapper[4779]: I1128 13:03:58.070768 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8msj7"] Nov 28 13:03:58 crc kubenswrapper[4779]: I1128 13:03:58.081477 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-425f-account-create-update-l6kl6"] Nov 28 13:03:58 crc kubenswrapper[4779]: I1128 13:03:58.092483 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8msj7"] Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.034778 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8ea0-account-create-update-4kjzs"] Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.049531 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-6cjk6"] Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.064913 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7fa2-account-create-update-jmqxq"] Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.074525 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8ea0-account-create-update-4kjzs"] Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.085129 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5vn8z"] Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.094121 4779 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/glance-7fa2-account-create-update-jmqxq"] Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.103121 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5vn8z"] Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.111964 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-6cjk6"] Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.742610 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9" path="/var/lib/kubelet/pods/0a63a0ed-a2ba-4acb-8f0d-a88e165e6cc9/volumes" Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.743323 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81190d6e-e211-4aae-890b-bfd66bd92381" path="/var/lib/kubelet/pods/81190d6e-e211-4aae-890b-bfd66bd92381/volumes" Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.743913 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e2a93bc-7245-4557-851b-33230f2031dc" path="/var/lib/kubelet/pods/8e2a93bc-7245-4557-851b-33230f2031dc/volumes" Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.744550 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1273702-2d8b-401a-afd5-335a5ceb8bbe" path="/var/lib/kubelet/pods/c1273702-2d8b-401a-afd5-335a5ceb8bbe/volumes" Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.745635 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e230e28f-3821-476c-b967-2dc505f4206c" path="/var/lib/kubelet/pods/e230e28f-3821-476c-b967-2dc505f4206c/volumes" Nov 28 13:03:59 crc kubenswrapper[4779]: I1128 13:03:59.746190 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f140dc70-fd92-49d0-b831-44c97eb32ead" path="/var/lib/kubelet/pods/f140dc70-fd92-49d0-b831-44c97eb32ead/volumes" Nov 28 13:04:07 crc kubenswrapper[4779]: I1128 13:04:07.726839 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:04:07 crc kubenswrapper[4779]: E1128 13:04:07.727740 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:04:08 crc kubenswrapper[4779]: I1128 13:04:08.238694 4779 scope.go:117] "RemoveContainer" containerID="2127b3d45c645c4324c1049ec66e2977c65be3269d8d492985536aba6284a0eb" Nov 28 13:04:08 crc kubenswrapper[4779]: I1128 13:04:08.285827 4779 scope.go:117] "RemoveContainer" containerID="da51acca282f1b5df22c6223a93480a5826ff2942625b9a772c53a78f9a8f914" Nov 28 13:04:08 crc kubenswrapper[4779]: I1128 13:04:08.339606 4779 scope.go:117] "RemoveContainer" containerID="bc4bf7c3e5cd78cac151a4c3aaf31093108c26b9d53fce0fbf18737703d40733" Nov 28 13:04:08 crc kubenswrapper[4779]: I1128 13:04:08.375584 4779 scope.go:117] "RemoveContainer" containerID="fc8c58524405df12ba69786bb2626f5c6b5e862bf65ef0c6a72a5fd1b721d53d" Nov 28 13:04:08 crc kubenswrapper[4779]: I1128 13:04:08.411529 4779 scope.go:117] "RemoveContainer" containerID="dc344c587375bd3128515924db8821f4e1d2fa6f8a7dca76850c0b4f8c8b53ca" Nov 28 13:04:08 crc kubenswrapper[4779]: I1128 13:04:08.468780 4779 scope.go:117] 
"RemoveContainer" containerID="4a8f33639020f3c0fe5b644edf134735b546c1893f7feb6c7a1293a777446c08" Nov 28 13:04:20 crc kubenswrapper[4779]: I1128 13:04:20.725838 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:04:20 crc kubenswrapper[4779]: E1128 13:04:20.726656 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:04:25 crc kubenswrapper[4779]: I1128 13:04:25.049876 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-h4bzm"] Nov 28 13:04:25 crc kubenswrapper[4779]: I1128 13:04:25.066589 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-js645"] Nov 28 13:04:25 crc kubenswrapper[4779]: I1128 13:04:25.076521 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-h4bzm"] Nov 28 13:04:25 crc kubenswrapper[4779]: I1128 13:04:25.084765 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-js645"] Nov 28 13:04:25 crc kubenswrapper[4779]: I1128 13:04:25.744467 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07dc1232-19a1-43de-9b7b-9613e964a39b" path="/var/lib/kubelet/pods/07dc1232-19a1-43de-9b7b-9613e964a39b/volumes" Nov 28 13:04:25 crc kubenswrapper[4779]: I1128 13:04:25.745971 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0e6aa9-aad0-4d46-85c9-11cf40ac2928" path="/var/lib/kubelet/pods/4e0e6aa9-aad0-4d46-85c9-11cf40ac2928/volumes" Nov 28 13:04:28 crc kubenswrapper[4779]: I1128 13:04:28.055648 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-bhx9p"] Nov 28 13:04:28 crc kubenswrapper[4779]: I1128 13:04:28.085850 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a859-account-create-update-82cnk"] Nov 28 13:04:28 crc kubenswrapper[4779]: I1128 13:04:28.108638 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-bhx9p"] Nov 28 13:04:28 crc kubenswrapper[4779]: I1128 13:04:28.120856 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6c6a-account-create-update-cvfsb"] Nov 28 13:04:28 crc kubenswrapper[4779]: I1128 13:04:28.135222 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-36a9-account-create-update-blv9g"] Nov 28 13:04:28 crc kubenswrapper[4779]: I1128 13:04:28.144419 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a859-account-create-update-82cnk"] Nov 28 13:04:28 crc kubenswrapper[4779]: I1128 13:04:28.151819 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-05a1-account-create-update-bjnnp"] Nov 28 13:04:28 crc kubenswrapper[4779]: I1128 13:04:28.159749 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bl4b8"] Nov 28 13:04:28 crc kubenswrapper[4779]: I1128 13:04:28.166807 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6c6a-account-create-update-cvfsb"] Nov 28 13:04:28 crc kubenswrapper[4779]: I1128 13:04:28.174071 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/heat-05a1-account-create-update-bjnnp"] Nov 28 13:04:28 crc kubenswrapper[4779]: I1128 13:04:28.182019 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-36a9-account-create-update-blv9g"] Nov 28 13:04:28 crc kubenswrapper[4779]: I1128 13:04:28.189355 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bl4b8"] Nov 28 13:04:29 crc kubenswrapper[4779]: I1128 13:04:29.750991 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aafeb02-0f52-4cf1-b856-e357be7e80b2" path="/var/lib/kubelet/pods/0aafeb02-0f52-4cf1-b856-e357be7e80b2/volumes" Nov 28 13:04:29 crc kubenswrapper[4779]: I1128 13:04:29.753738 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73eaf386-eead-4fbe-bbfb-a41423521b9f" path="/var/lib/kubelet/pods/73eaf386-eead-4fbe-bbfb-a41423521b9f/volumes" Nov 28 13:04:29 crc kubenswrapper[4779]: I1128 13:04:29.755318 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="917a2830-66a9-4c55-9cf0-74c6dac98030" path="/var/lib/kubelet/pods/917a2830-66a9-4c55-9cf0-74c6dac98030/volumes" Nov 28 13:04:29 crc kubenswrapper[4779]: I1128 13:04:29.756578 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b001146a-ebfc-4821-b7b8-3dbbf14749c9" path="/var/lib/kubelet/pods/b001146a-ebfc-4821-b7b8-3dbbf14749c9/volumes" Nov 28 13:04:29 crc kubenswrapper[4779]: I1128 13:04:29.758699 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ebdf4b-0d38-4646-9e9d-742c3152849c" path="/var/lib/kubelet/pods/b3ebdf4b-0d38-4646-9e9d-742c3152849c/volumes" Nov 28 13:04:29 crc kubenswrapper[4779]: I1128 13:04:29.759423 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef019325-ce2d-4119-85d3-eac3868665ce" path="/var/lib/kubelet/pods/ef019325-ce2d-4119-85d3-eac3868665ce/volumes" Nov 28 13:04:30 crc kubenswrapper[4779]: I1128 13:04:30.042513 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-hzq4r"] Nov 28 13:04:30 crc kubenswrapper[4779]: I1128 13:04:30.058157 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-hzq4r"] Nov 28 13:04:31 crc kubenswrapper[4779]: I1128 13:04:31.751974 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30731004-d3bb-4ed7-820a-37fe3e7ee7e1" path="/var/lib/kubelet/pods/30731004-d3bb-4ed7-820a-37fe3e7ee7e1/volumes" Nov 28 13:04:33 crc kubenswrapper[4779]: I1128 13:04:33.050077 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-vlmfj"] Nov 28 13:04:33 crc kubenswrapper[4779]: I1128 13:04:33.063579 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-vlmfj"] Nov 28 13:04:33 crc kubenswrapper[4779]: I1128 13:04:33.728634 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:04:33 crc kubenswrapper[4779]: E1128 13:04:33.730087 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:04:33 crc kubenswrapper[4779]: I1128 13:04:33.743162 4779 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="4ae2270c-607f-4315-959e-eb8536afafe9" path="/var/lib/kubelet/pods/4ae2270c-607f-4315-959e-eb8536afafe9/volumes" Nov 28 13:04:47 crc kubenswrapper[4779]: I1128 13:04:47.726961 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:04:47 crc kubenswrapper[4779]: E1128 13:04:47.728171 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:05:00 crc kubenswrapper[4779]: I1128 13:05:00.726894 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:05:00 crc kubenswrapper[4779]: E1128 13:05:00.727744 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:05:08 crc kubenswrapper[4779]: I1128 13:05:08.664670 4779 scope.go:117] "RemoveContainer" containerID="3b0fc7683f2230e31c28fb1a5a9771b7c83d12a389ecb78437e32f3c551262e8" Nov 28 13:05:08 crc kubenswrapper[4779]: I1128 13:05:08.695076 4779 scope.go:117] "RemoveContainer" containerID="e781b14fc8cd82214a4ac1e0bc8b2cbae527cfa43c17b3146ab4866d4c42828f" Nov 28 13:05:08 crc kubenswrapper[4779]: I1128 13:05:08.732422 4779 scope.go:117] "RemoveContainer" containerID="ace92a66f48960cbff8f49d4255b9015c950a72ce94c5e5e8560d9829fe42fed" Nov 28 13:05:08 crc kubenswrapper[4779]: I1128 13:05:08.809237 4779 scope.go:117] "RemoveContainer" containerID="d5712495c1505a6517ba972b5a5eff011ae8d6a80eefe06e89033e23e512c235" Nov 28 13:05:08 crc kubenswrapper[4779]: I1128 13:05:08.873178 4779 scope.go:117] "RemoveContainer" containerID="ba8c4cb8663ad9a2e4f4df75a7f7dd6f90b98d7d5ad7ccd8f0d6f920c243499f" Nov 28 13:05:08 crc kubenswrapper[4779]: I1128 13:05:08.907136 4779 scope.go:117] "RemoveContainer" containerID="3bd66579955aa434bcbddd29386fbe17ea2dba6b7b6ee070c4c82d4c626c53c0" Nov 28 13:05:08 crc kubenswrapper[4779]: I1128 13:05:08.947027 4779 scope.go:117] "RemoveContainer" containerID="fe7b5777bf81ec3381397ea71bbabb6f37a576a56aaec9256a74fb1b34b4fafe" Nov 28 13:05:08 crc kubenswrapper[4779]: I1128 13:05:08.995435 4779 scope.go:117] "RemoveContainer" containerID="26fededc6d08e301a2ca39e554e235beff22fb28cc2965eedeb2d6746ed78e18" Nov 28 13:05:09 crc kubenswrapper[4779]: I1128 13:05:09.048516 4779 scope.go:117] "RemoveContainer" containerID="93db06973d8e05ad0b9283a1955630c607c962cf1fe004a75e5916a4363e4d32" Nov 28 13:05:09 crc kubenswrapper[4779]: I1128 13:05:09.082355 4779 scope.go:117] "RemoveContainer" containerID="1eada839e5267d3245004362b6eb536e129a4d143dc96d5010455efc59426b88" Nov 28 13:05:10 crc kubenswrapper[4779]: I1128 13:05:10.063237 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-q7v56"] Nov 28 13:05:10 crc kubenswrapper[4779]: I1128 13:05:10.076896 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-db-sync-q7v56"] Nov 28 13:05:11 crc kubenswrapper[4779]: I1128 13:05:11.748442 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="082059dc-73e6-482b-a0ad-ed2a62282f61" path="/var/lib/kubelet/pods/082059dc-73e6-482b-a0ad-ed2a62282f61/volumes" Nov 28 13:05:14 crc kubenswrapper[4779]: I1128 13:05:14.071219 4779 generic.go:334] "Generic (PLEG): container finished" podID="867eb458-fc69-4d08-958e-67f69bbf7ec9" containerID="d64d0b3f5e4f48f1b0dc0ce8c43b1b050f093271f8b0355cdd10de7a82eaedb5" exitCode=0 Nov 28 13:05:14 crc kubenswrapper[4779]: I1128 13:05:14.071291 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" event={"ID":"867eb458-fc69-4d08-958e-67f69bbf7ec9","Type":"ContainerDied","Data":"d64d0b3f5e4f48f1b0dc0ce8c43b1b050f093271f8b0355cdd10de7a82eaedb5"} Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.048643 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hgpw7"] Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.066421 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hgpw7"] Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.605128 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.727407 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:05:15 crc kubenswrapper[4779]: E1128 13:05:15.727759 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.737879 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4dca0b7-4681-4e3c-8602-b777c31b27f1" path="/var/lib/kubelet/pods/a4dca0b7-4681-4e3c-8602-b777c31b27f1/volumes" Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.738807 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/867eb458-fc69-4d08-958e-67f69bbf7ec9-ssh-key\") pod \"867eb458-fc69-4d08-958e-67f69bbf7ec9\" (UID: \"867eb458-fc69-4d08-958e-67f69bbf7ec9\") " Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.738933 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/867eb458-fc69-4d08-958e-67f69bbf7ec9-inventory\") pod \"867eb458-fc69-4d08-958e-67f69bbf7ec9\" (UID: \"867eb458-fc69-4d08-958e-67f69bbf7ec9\") " Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.739014 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wknfq\" (UniqueName: \"kubernetes.io/projected/867eb458-fc69-4d08-958e-67f69bbf7ec9-kube-api-access-wknfq\") pod \"867eb458-fc69-4d08-958e-67f69bbf7ec9\" (UID: \"867eb458-fc69-4d08-958e-67f69bbf7ec9\") " Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.747448 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/867eb458-fc69-4d08-958e-67f69bbf7ec9-kube-api-access-wknfq" (OuterVolumeSpecName: "kube-api-access-wknfq") pod "867eb458-fc69-4d08-958e-67f69bbf7ec9" (UID: "867eb458-fc69-4d08-958e-67f69bbf7ec9"). InnerVolumeSpecName "kube-api-access-wknfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.775180 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/867eb458-fc69-4d08-958e-67f69bbf7ec9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "867eb458-fc69-4d08-958e-67f69bbf7ec9" (UID: "867eb458-fc69-4d08-958e-67f69bbf7ec9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.776449 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/867eb458-fc69-4d08-958e-67f69bbf7ec9-inventory" (OuterVolumeSpecName: "inventory") pod "867eb458-fc69-4d08-958e-67f69bbf7ec9" (UID: "867eb458-fc69-4d08-958e-67f69bbf7ec9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.841052 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/867eb458-fc69-4d08-958e-67f69bbf7ec9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.841510 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/867eb458-fc69-4d08-958e-67f69bbf7ec9-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 13:05:15 crc kubenswrapper[4779]: I1128 13:05:15.841527 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wknfq\" (UniqueName: \"kubernetes.io/projected/867eb458-fc69-4d08-958e-67f69bbf7ec9-kube-api-access-wknfq\") on node \"crc\" DevicePath \"\"" Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.093196 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz" event={"ID":"867eb458-fc69-4d08-958e-67f69bbf7ec9","Type":"ContainerDied","Data":"79bf56d7b581d1a838830ff41f862d573a0c807e5ac3975946ca4717c63ad8dd"} Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.093261 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bf56d7b581d1a838830ff41f862d573a0c807e5ac3975946ca4717c63ad8dd" Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.093268 4779 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.197520 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"]
Nov 28 13:05:16 crc kubenswrapper[4779]: E1128 13:05:16.205693 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867eb458-fc69-4d08-958e-67f69bbf7ec9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.205721 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="867eb458-fc69-4d08-958e-67f69bbf7ec9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.206070 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="867eb458-fc69-4d08-958e-67f69bbf7ec9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.206766 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.209591 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.209778 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.210047 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.211576 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.227263 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"]
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.252322 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc\" (UID: \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.252686 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xppdr\" (UniqueName: \"kubernetes.io/projected/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-kube-api-access-xppdr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc\" (UID: \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.252767 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc\" (UID: \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.354775 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xppdr\" (UniqueName: \"kubernetes.io/projected/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-kube-api-access-xppdr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc\" (UID: \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.354824 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc\" (UID: \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.354945 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc\" (UID: \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.360231 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc\" (UID: \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.360801 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc\" (UID: \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.371635 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xppdr\" (UniqueName: \"kubernetes.io/projected/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-kube-api-access-xppdr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc\" (UID: \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"
Nov 28 13:05:16 crc kubenswrapper[4779]: I1128 13:05:16.526622 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"
Nov 28 13:05:17 crc kubenswrapper[4779]: I1128 13:05:17.181092 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc"]
Nov 28 13:05:18 crc kubenswrapper[4779]: I1128 13:05:18.122795 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc" event={"ID":"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96","Type":"ContainerStarted","Data":"12cd7b999ab38dbd77b1c9caee1c8a42fdc89ebe12b6d4c77c35972edaeae78f"}
Nov 28 13:05:18 crc kubenswrapper[4779]: I1128 13:05:18.123365 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc" event={"ID":"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96","Type":"ContainerStarted","Data":"7b42e2578ab5d7286e03c1fb8c79b7f2db1c43082bd5148ba4e709050db9da85"}
Nov 28 13:05:18 crc kubenswrapper[4779]: I1128 13:05:18.159040 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc" podStartSLOduration=1.603688642 podStartE2EDuration="2.159007282s" podCreationTimestamp="2025-11-28 13:05:16 +0000 UTC" firstStartedPulling="2025-11-28 13:05:17.199275081 +0000 UTC m=+1777.764950425" lastFinishedPulling="2025-11-28 13:05:17.754593701 +0000 UTC m=+1778.320269065" observedRunningTime="2025-11-28 13:05:18.147050451 +0000 UTC m=+1778.712725815" watchObservedRunningTime="2025-11-28 13:05:18.159007282 +0000 UTC m=+1778.724682686"
Nov 28 13:05:26 crc kubenswrapper[4779]: I1128 13:05:26.040697 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-2rlgj"]
Nov 28 13:05:26 crc kubenswrapper[4779]: I1128 13:05:26.049270 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-nrmk4"]
Nov 28 13:05:26 crc kubenswrapper[4779]: I1128 13:05:26.060742 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-nrmk4"]
Nov 28 13:05:26 crc kubenswrapper[4779]: I1128 13:05:26.080263 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-2rlgj"]
Nov 28 13:05:27 crc kubenswrapper[4779]: I1128 13:05:27.743554 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="090eca16-3536-4b84-85c8-e9a0d3a7deb6" path="/var/lib/kubelet/pods/090eca16-3536-4b84-85c8-e9a0d3a7deb6/volumes"
Nov 28 13:05:27 crc kubenswrapper[4779]: I1128 13:05:27.744902 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be93bf1f-510b-4a38-8f85-b59c36b2feb1" path="/var/lib/kubelet/pods/be93bf1f-510b-4a38-8f85-b59c36b2feb1/volumes"
Nov 28 13:05:29 crc kubenswrapper[4779]: I1128 13:05:29.742032 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf"
Nov 28 13:05:29 crc kubenswrapper[4779]: E1128 13:05:29.742928 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:05:30 crc kubenswrapper[4779]: I1128 13:05:30.043171 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-ggv2n"]
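[Note: the pod_startup_latency_tracker entry above encodes a small calculation. For ...-s4kqc, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), computed from the monotonic m=+... offsets. A worked check against the logged values; treat the decomposition as an observation from this entry rather than a spec of the tracker:]

```python
# Worked check of the "Observed pod startup duration" entry above.
e2e  = 2.159007282                      # podStartE2EDuration, seconds
pull = 1778.320269065 - 1777.764950425  # lastFinishedPulling - firstStartedPulling
slo  = e2e - pull                       # startup latency excluding image pull
print(f"{slo:.9f}")                     # 1.603688642 == podStartSLOduration
```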
source="api" pods=["openstack/barbican-db-sync-ggv2n"] Nov 28 13:05:30 crc kubenswrapper[4779]: I1128 13:05:30.057666 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-ggv2n"] Nov 28 13:05:31 crc kubenswrapper[4779]: I1128 13:05:31.744609 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f844f06-a227-4423-9d97-33f9c85c0df8" path="/var/lib/kubelet/pods/1f844f06-a227-4423-9d97-33f9c85c0df8/volumes" Nov 28 13:05:33 crc kubenswrapper[4779]: I1128 13:05:33.045047 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-sh4hl"] Nov 28 13:05:33 crc kubenswrapper[4779]: I1128 13:05:33.060857 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-sh4hl"] Nov 28 13:05:33 crc kubenswrapper[4779]: I1128 13:05:33.739825 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa51f35-9ab4-4629-ae5a-349484d0917d" path="/var/lib/kubelet/pods/aaa51f35-9ab4-4629-ae5a-349484d0917d/volumes" Nov 28 13:05:41 crc kubenswrapper[4779]: I1128 13:05:41.726976 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:05:41 crc kubenswrapper[4779]: E1128 13:05:41.728315 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:05:53 crc kubenswrapper[4779]: I1128 13:05:53.727433 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:05:53 crc kubenswrapper[4779]: E1128 13:05:53.728474 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:06:05 crc kubenswrapper[4779]: I1128 13:06:05.726985 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:06:05 crc kubenswrapper[4779]: E1128 13:06:05.730771 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:06:09 crc kubenswrapper[4779]: I1128 13:06:09.358371 4779 scope.go:117] "RemoveContainer" containerID="121b08d2adad132bfcefaa378c89b7d30716fbd3708ae63ea920eecc8466b749" Nov 28 13:06:09 crc kubenswrapper[4779]: I1128 13:06:09.434624 4779 scope.go:117] "RemoveContainer" containerID="dfbb86a0458722a007432ce973db0be61e1713c43e30999473692d6f1f0db0b5" Nov 28 13:06:09 crc kubenswrapper[4779]: I1128 13:06:09.494324 4779 scope.go:117] "RemoveContainer" 
containerID="d1d8f787176c8df9ef9f55761ae58991d4c78c9e80ce202068bfc1496ae1f167" Nov 28 13:06:09 crc kubenswrapper[4779]: I1128 13:06:09.552155 4779 scope.go:117] "RemoveContainer" containerID="f8062de0006b63e01d516fec87f5ce45e8e1ff97a8d34602c6f5d5f2a47bd0c6" Nov 28 13:06:09 crc kubenswrapper[4779]: I1128 13:06:09.601965 4779 scope.go:117] "RemoveContainer" containerID="eed56070a46314d3063bf11a7af91f813f4c532a1abe79a677c9bd9a61beba5e" Nov 28 13:06:09 crc kubenswrapper[4779]: I1128 13:06:09.648432 4779 scope.go:117] "RemoveContainer" containerID="ba69b178e94de13e79a3c7e8d5ba4c90bbc358a0148d4b49f0ddce22f9797d61" Nov 28 13:06:15 crc kubenswrapper[4779]: I1128 13:06:15.042618 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dwv55"] Nov 28 13:06:15 crc kubenswrapper[4779]: I1128 13:06:15.056373 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dwv55"] Nov 28 13:06:15 crc kubenswrapper[4779]: I1128 13:06:15.748400 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e457ff42-a87d-4bfd-91d1-bcdc8632533d" path="/var/lib/kubelet/pods/e457ff42-a87d-4bfd-91d1-bcdc8632533d/volumes" Nov 28 13:06:16 crc kubenswrapper[4779]: I1128 13:06:16.072266 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f633-account-create-update-sj299"] Nov 28 13:06:16 crc kubenswrapper[4779]: I1128 13:06:16.081883 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-237e-account-create-update-8kj9n"] Nov 28 13:06:16 crc kubenswrapper[4779]: I1128 13:06:16.094414 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f633-account-create-update-sj299"] Nov 28 13:06:16 crc kubenswrapper[4779]: I1128 13:06:16.104206 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-29dd-account-create-update-zs2sb"] Nov 28 13:06:16 crc kubenswrapper[4779]: I1128 13:06:16.110783 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cv5pl"] Nov 28 13:06:16 crc kubenswrapper[4779]: I1128 13:06:16.117368 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-dkjsg"] Nov 28 13:06:16 crc kubenswrapper[4779]: I1128 13:06:16.123839 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-237e-account-create-update-8kj9n"] Nov 28 13:06:16 crc kubenswrapper[4779]: I1128 13:06:16.131538 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-dkjsg"] Nov 28 13:06:16 crc kubenswrapper[4779]: I1128 13:06:16.141722 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-29dd-account-create-update-zs2sb"] Nov 28 13:06:16 crc kubenswrapper[4779]: I1128 13:06:16.151039 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cv5pl"] Nov 28 13:06:16 crc kubenswrapper[4779]: I1128 13:06:16.727272 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:06:16 crc kubenswrapper[4779]: E1128 13:06:16.728167 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" 
podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:06:17 crc kubenswrapper[4779]: I1128 13:06:17.747555 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3284e9ee-a945-4fb4-ae73-e2e2f580c7ac" path="/var/lib/kubelet/pods/3284e9ee-a945-4fb4-ae73-e2e2f580c7ac/volumes" Nov 28 13:06:17 crc kubenswrapper[4779]: I1128 13:06:17.748561 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a740778-7386-4f7a-ad57-4bd5fa6c2fc6" path="/var/lib/kubelet/pods/8a740778-7386-4f7a-ad57-4bd5fa6c2fc6/volumes" Nov 28 13:06:17 crc kubenswrapper[4779]: I1128 13:06:17.749570 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00b970a-2a2a-40c2-bc07-c1bb05d74810" path="/var/lib/kubelet/pods/a00b970a-2a2a-40c2-bc07-c1bb05d74810/volumes" Nov 28 13:06:17 crc kubenswrapper[4779]: I1128 13:06:17.750508 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf493f67-858c-412a-9bf8-804c687c4f12" path="/var/lib/kubelet/pods/cf493f67-858c-412a-9bf8-804c687c4f12/volumes" Nov 28 13:06:17 crc kubenswrapper[4779]: I1128 13:06:17.751869 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea6137f-d265-4494-a0ba-f92b3bdd82a2" path="/var/lib/kubelet/pods/fea6137f-d265-4494-a0ba-f92b3bdd82a2/volumes" Nov 28 13:06:28 crc kubenswrapper[4779]: I1128 13:06:28.727299 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:06:28 crc kubenswrapper[4779]: E1128 13:06:28.728551 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:06:40 crc kubenswrapper[4779]: I1128 13:06:40.179282 4779 generic.go:334] "Generic (PLEG): container finished" podID="c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96" containerID="12cd7b999ab38dbd77b1c9caee1c8a42fdc89ebe12b6d4c77c35972edaeae78f" exitCode=0 Nov 28 13:06:40 crc kubenswrapper[4779]: I1128 13:06:40.179473 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc" event={"ID":"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96","Type":"ContainerDied","Data":"12cd7b999ab38dbd77b1c9caee1c8a42fdc89ebe12b6d4c77c35972edaeae78f"} Nov 28 13:06:41 crc kubenswrapper[4779]: I1128 13:06:41.601355 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc" Nov 28 13:06:41 crc kubenswrapper[4779]: I1128 13:06:41.797947 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-ssh-key\") pod \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\" (UID: \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\") " Nov 28 13:06:41 crc kubenswrapper[4779]: I1128 13:06:41.798208 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-inventory\") pod \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\" (UID: \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\") " Nov 28 13:06:41 crc kubenswrapper[4779]: I1128 13:06:41.798374 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xppdr\" (UniqueName: \"kubernetes.io/projected/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-kube-api-access-xppdr\") pod \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\" (UID: \"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96\") " Nov 28 13:06:41 crc kubenswrapper[4779]: I1128 13:06:41.803477 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-kube-api-access-xppdr" (OuterVolumeSpecName: "kube-api-access-xppdr") pod "c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96" (UID: "c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96"). InnerVolumeSpecName "kube-api-access-xppdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:06:41 crc kubenswrapper[4779]: I1128 13:06:41.847535 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-inventory" (OuterVolumeSpecName: "inventory") pod "c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96" (UID: "c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:06:41 crc kubenswrapper[4779]: I1128 13:06:41.849677 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96" (UID: "c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:06:41 crc kubenswrapper[4779]: I1128 13:06:41.900609 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 13:06:41 crc kubenswrapper[4779]: I1128 13:06:41.900645 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 13:06:41 crc kubenswrapper[4779]: I1128 13:06:41.900658 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xppdr\" (UniqueName: \"kubernetes.io/projected/c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96-kube-api-access-xppdr\") on node \"crc\" DevicePath \"\"" Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.203867 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc" event={"ID":"c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96","Type":"ContainerDied","Data":"7b42e2578ab5d7286e03c1fb8c79b7f2db1c43082bd5148ba4e709050db9da85"} Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.203911 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b42e2578ab5d7286e03c1fb8c79b7f2db1c43082bd5148ba4e709050db9da85" Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.203945 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc" Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.323697 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"] Nov 28 13:06:42 crc kubenswrapper[4779]: E1128 13:06:42.324459 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.324502 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.325015 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.326395 4779 util.go:30] "No sandbox for pod can be found. 
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.337301 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"]
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.363596 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.363931 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.365769 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.382122 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.512406 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01e25eb1-de3d-4912-933f-09b22837436d-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt\" (UID: \"01e25eb1-de3d-4912-933f-09b22837436d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.512485 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01e25eb1-de3d-4912-933f-09b22837436d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt\" (UID: \"01e25eb1-de3d-4912-933f-09b22837436d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.512521 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc86r\" (UniqueName: \"kubernetes.io/projected/01e25eb1-de3d-4912-933f-09b22837436d-kube-api-access-pc86r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt\" (UID: \"01e25eb1-de3d-4912-933f-09b22837436d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.614017 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01e25eb1-de3d-4912-933f-09b22837436d-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt\" (UID: \"01e25eb1-de3d-4912-933f-09b22837436d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.614374 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01e25eb1-de3d-4912-933f-09b22837436d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt\" (UID: \"01e25eb1-de3d-4912-933f-09b22837436d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.614403 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc86r\" (UniqueName: \"kubernetes.io/projected/01e25eb1-de3d-4912-933f-09b22837436d-kube-api-access-pc86r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt\" (UID: \"01e25eb1-de3d-4912-933f-09b22837436d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.617874 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01e25eb1-de3d-4912-933f-09b22837436d-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt\" (UID: \"01e25eb1-de3d-4912-933f-09b22837436d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.620329 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01e25eb1-de3d-4912-933f-09b22837436d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt\" (UID: \"01e25eb1-de3d-4912-933f-09b22837436d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.634803 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc86r\" (UniqueName: \"kubernetes.io/projected/01e25eb1-de3d-4912-933f-09b22837436d-kube-api-access-pc86r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt\" (UID: \"01e25eb1-de3d-4912-933f-09b22837436d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.677115 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"
Nov 28 13:06:42 crc kubenswrapper[4779]: I1128 13:06:42.726589 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf"
Nov 28 13:06:42 crc kubenswrapper[4779]: E1128 13:06:42.727051 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:06:43 crc kubenswrapper[4779]: I1128 13:06:43.358790 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"]
Nov 28 13:06:43 crc kubenswrapper[4779]: W1128 13:06:43.360898 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01e25eb1_de3d_4912_933f_09b22837436d.slice/crio-6e8d7631944f545142ce9e7045833cd4bdc5c4995476eed6f119c1d6846a3ca1 WatchSource:0}: Error finding container 6e8d7631944f545142ce9e7045833cd4bdc5c4995476eed6f119c1d6846a3ca1: Status 404 returned error can't find the container with id 6e8d7631944f545142ce9e7045833cd4bdc5c4995476eed6f119c1d6846a3ca1
Nov 28 13:06:44 crc kubenswrapper[4779]: I1128 13:06:44.231118 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt" event={"ID":"01e25eb1-de3d-4912-933f-09b22837436d","Type":"ContainerStarted","Data":"558901ee2308f435d0f323efb466428ece075ebd0d25d5aafef1d94e599aecfe"}
Nov 28 13:06:44 crc kubenswrapper[4779]: I1128 13:06:44.231461 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt" event={"ID":"01e25eb1-de3d-4912-933f-09b22837436d","Type":"ContainerStarted","Data":"6e8d7631944f545142ce9e7045833cd4bdc5c4995476eed6f119c1d6846a3ca1"}
Nov 28 13:06:44 crc kubenswrapper[4779]: I1128 13:06:44.251275 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt" podStartSLOduration=1.665191959 podStartE2EDuration="2.251259314s" podCreationTimestamp="2025-11-28 13:06:42 +0000 UTC" firstStartedPulling="2025-11-28 13:06:43.364810535 +0000 UTC m=+1863.930485909" lastFinishedPulling="2025-11-28 13:06:43.95087791 +0000 UTC m=+1864.516553264" observedRunningTime="2025-11-28 13:06:44.243593129 +0000 UTC m=+1864.809268503" watchObservedRunningTime="2025-11-28 13:06:44.251259314 +0000 UTC m=+1864.816934668"
Nov 28 13:06:50 crc kubenswrapper[4779]: I1128 13:06:50.290607 4779 generic.go:334] "Generic (PLEG): container finished" podID="01e25eb1-de3d-4912-933f-09b22837436d" containerID="558901ee2308f435d0f323efb466428ece075ebd0d25d5aafef1d94e599aecfe" exitCode=0
Nov 28 13:06:50 crc kubenswrapper[4779]: I1128 13:06:50.290692 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt" event={"ID":"01e25eb1-de3d-4912-933f-09b22837436d","Type":"ContainerDied","Data":"558901ee2308f435d0f323efb466428ece075ebd0d25d5aafef1d94e599aecfe"}
Nov 28 13:06:51 crc kubenswrapper[4779]: I1128 13:06:51.791280 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"
Nov 28 13:06:51 crc kubenswrapper[4779]: I1128 13:06:51.937227 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01e25eb1-de3d-4912-933f-09b22837436d-ssh-key\") pod \"01e25eb1-de3d-4912-933f-09b22837436d\" (UID: \"01e25eb1-de3d-4912-933f-09b22837436d\") "
Nov 28 13:06:51 crc kubenswrapper[4779]: I1128 13:06:51.937464 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01e25eb1-de3d-4912-933f-09b22837436d-inventory\") pod \"01e25eb1-de3d-4912-933f-09b22837436d\" (UID: \"01e25eb1-de3d-4912-933f-09b22837436d\") "
Nov 28 13:06:51 crc kubenswrapper[4779]: I1128 13:06:51.937738 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc86r\" (UniqueName: \"kubernetes.io/projected/01e25eb1-de3d-4912-933f-09b22837436d-kube-api-access-pc86r\") pod \"01e25eb1-de3d-4912-933f-09b22837436d\" (UID: \"01e25eb1-de3d-4912-933f-09b22837436d\") "
Nov 28 13:06:51 crc kubenswrapper[4779]: I1128 13:06:51.950312 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e25eb1-de3d-4912-933f-09b22837436d-kube-api-access-pc86r" (OuterVolumeSpecName: "kube-api-access-pc86r") pod "01e25eb1-de3d-4912-933f-09b22837436d" (UID: "01e25eb1-de3d-4912-933f-09b22837436d"). InnerVolumeSpecName "kube-api-access-pc86r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:06:51 crc kubenswrapper[4779]: I1128 13:06:51.969061 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e25eb1-de3d-4912-933f-09b22837436d-inventory" (OuterVolumeSpecName: "inventory") pod "01e25eb1-de3d-4912-933f-09b22837436d" (UID: "01e25eb1-de3d-4912-933f-09b22837436d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:06:51 crc kubenswrapper[4779]: I1128 13:06:51.987735 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e25eb1-de3d-4912-933f-09b22837436d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "01e25eb1-de3d-4912-933f-09b22837436d" (UID: "01e25eb1-de3d-4912-933f-09b22837436d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.040254 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01e25eb1-de3d-4912-933f-09b22837436d-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.040290 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01e25eb1-de3d-4912-933f-09b22837436d-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.040301 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc86r\" (UniqueName: \"kubernetes.io/projected/01e25eb1-de3d-4912-933f-09b22837436d-kube-api-access-pc86r\") on node \"crc\" DevicePath \"\""
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.314014 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt" event={"ID":"01e25eb1-de3d-4912-933f-09b22837436d","Type":"ContainerDied","Data":"6e8d7631944f545142ce9e7045833cd4bdc5c4995476eed6f119c1d6846a3ca1"}
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.314053 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.314059 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e8d7631944f545142ce9e7045833cd4bdc5c4995476eed6f119c1d6846a3ca1"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.415171 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"]
Nov 28 13:06:52 crc kubenswrapper[4779]: E1128 13:06:52.415883 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e25eb1-de3d-4912-933f-09b22837436d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.415914 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e25eb1-de3d-4912-933f-09b22837436d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.416327 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e25eb1-de3d-4912-933f-09b22837436d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.417464 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.421007 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.421559 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.421801 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.422177 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.425227 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"]
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.550136 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10420a90-84fa-45f9-a726-b3fcb8db4a20-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cjqrj\" (UID: \"10420a90-84fa-45f9-a726-b3fcb8db4a20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.550667 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10420a90-84fa-45f9-a726-b3fcb8db4a20-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cjqrj\" (UID: \"10420a90-84fa-45f9-a726-b3fcb8db4a20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.550880 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g2nc\" (UniqueName: \"kubernetes.io/projected/10420a90-84fa-45f9-a726-b3fcb8db4a20-kube-api-access-8g2nc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cjqrj\" (UID: \"10420a90-84fa-45f9-a726-b3fcb8db4a20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.653805 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g2nc\" (UniqueName: \"kubernetes.io/projected/10420a90-84fa-45f9-a726-b3fcb8db4a20-kube-api-access-8g2nc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cjqrj\" (UID: \"10420a90-84fa-45f9-a726-b3fcb8db4a20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.654208 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10420a90-84fa-45f9-a726-b3fcb8db4a20-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cjqrj\" (UID: \"10420a90-84fa-45f9-a726-b3fcb8db4a20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.654322 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10420a90-84fa-45f9-a726-b3fcb8db4a20-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cjqrj\" (UID: \"10420a90-84fa-45f9-a726-b3fcb8db4a20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.658717 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10420a90-84fa-45f9-a726-b3fcb8db4a20-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cjqrj\" (UID: \"10420a90-84fa-45f9-a726-b3fcb8db4a20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.668352 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10420a90-84fa-45f9-a726-b3fcb8db4a20-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cjqrj\" (UID: \"10420a90-84fa-45f9-a726-b3fcb8db4a20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.683023 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g2nc\" (UniqueName: \"kubernetes.io/projected/10420a90-84fa-45f9-a726-b3fcb8db4a20-kube-api-access-8g2nc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cjqrj\" (UID: \"10420a90-84fa-45f9-a726-b3fcb8db4a20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"
Nov 28 13:06:52 crc kubenswrapper[4779]: I1128 13:06:52.764791 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"
Nov 28 13:06:53 crc kubenswrapper[4779]: W1128 13:06:53.334872 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10420a90_84fa_45f9_a726_b3fcb8db4a20.slice/crio-05122ca2973c3b83d27967395048541e3058412e159b40d843aa2c89e4f4705a WatchSource:0}: Error finding container 05122ca2973c3b83d27967395048541e3058412e159b40d843aa2c89e4f4705a: Status 404 returned error can't find the container with id 05122ca2973c3b83d27967395048541e3058412e159b40d843aa2c89e4f4705a
Nov 28 13:06:53 crc kubenswrapper[4779]: I1128 13:06:53.336274 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj"]
Nov 28 13:06:54 crc kubenswrapper[4779]: I1128 13:06:54.337851 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj" event={"ID":"10420a90-84fa-45f9-a726-b3fcb8db4a20","Type":"ContainerStarted","Data":"1a3209c15cfd04639bcd2d14351ee366c37e91d12ecf409d21f47f203ae1d830"}
Nov 28 13:06:54 crc kubenswrapper[4779]: I1128 13:06:54.338234 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj" event={"ID":"10420a90-84fa-45f9-a726-b3fcb8db4a20","Type":"ContainerStarted","Data":"05122ca2973c3b83d27967395048541e3058412e159b40d843aa2c89e4f4705a"}
Nov 28 13:06:54 crc kubenswrapper[4779]: I1128 13:06:54.361965 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj" podStartSLOduration=1.85632044 podStartE2EDuration="2.361950781s" podCreationTimestamp="2025-11-28 13:06:52 +0000 UTC" firstStartedPulling="2025-11-28 13:06:53.338572436 +0000 UTC m=+1873.904247790" lastFinishedPulling="2025-11-28 13:06:53.844202747 +0000 UTC m=+1874.409878131" observedRunningTime="2025-11-28 13:06:54.358053887 +0000 UTC m=+1874.923729241" watchObservedRunningTime="2025-11-28 13:06:54.361950781 +0000 UTC m=+1874.927626125"
Nov 28 13:06:56 crc kubenswrapper[4779]: I1128 13:06:56.726473 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf"
Nov 28 13:06:57 crc kubenswrapper[4779]: I1128 13:06:57.385136 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"618d5fbe87fa7087aedd943e484ddc1d1d52c7576dd65968b53ae378fd1610f9"}
Nov 28 13:06:59 crc kubenswrapper[4779]: I1128 13:06:59.043552 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tlgl6"]
Nov 28 13:06:59 crc kubenswrapper[4779]: I1128 13:06:59.052890 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tlgl6"]
Nov 28 13:06:59 crc kubenswrapper[4779]: I1128 13:06:59.739081 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e20123-0f0f-48a0-8412-6167f107ed2a" path="/var/lib/kubelet/pods/21e20123-0f0f-48a0-8412-6167f107ed2a/volumes"
Nov 28 13:07:09 crc kubenswrapper[4779]: I1128 13:07:09.872313 4779 scope.go:117] "RemoveContainer" containerID="e2cfa8fd89a409a79882e811dc980291b1440d11cbabb24ecd4dd07a4f42b670"
Nov 28 13:07:09 crc kubenswrapper[4779]: I1128 13:07:09.915266 4779 scope.go:117] "RemoveContainer" containerID="5164249bfec8b5dd2e37a19274611d1e2dff07ec378dfa823574c05ed3ecd863"
Nov 28 13:07:09 crc kubenswrapper[4779]: I1128 13:07:09.959457 4779 scope.go:117] "RemoveContainer" containerID="022440ff63c3b6837f6091a7c8348788600a0b01cdfe7994b11c6182e955e40c"
Nov 28 13:07:10 crc kubenswrapper[4779]: I1128 13:07:10.031757 4779 scope.go:117] "RemoveContainer" containerID="1cc5cc53cd478678ee5bb23bb96389485b4796828fbc7ca5a9225ea7559680c6"
Nov 28 13:07:10 crc kubenswrapper[4779]: I1128 13:07:10.055174 4779 scope.go:117] "RemoveContainer" containerID="654e9402753c28d374e3c509719b80da86524908ce29ef08f783560c53b34488"
Nov 28 13:07:10 crc kubenswrapper[4779]: I1128 13:07:10.092261 4779 scope.go:117] "RemoveContainer" containerID="0604a82ed4585c583206490b24ced7dfd0b9874017e9242aaafb4cd5829ad83c"
Nov 28 13:07:10 crc kubenswrapper[4779]: I1128 13:07:10.129619 4779 scope.go:117] "RemoveContainer" containerID="0158cd182892aabae03bca60078a21c1c658f82a89a3b70ab19bdee235f23dd3"
Nov 28 13:07:21 crc kubenswrapper[4779]: I1128 13:07:21.046340 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-q9xf9"]
Nov 28 13:07:21 crc kubenswrapper[4779]: I1128 13:07:21.057410 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-q9xf9"]
Nov 28 13:07:21 crc kubenswrapper[4779]: I1128 13:07:21.743362 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e860d8bc-f4c3-4923-ba29-3fb022978027" path="/var/lib/kubelet/pods/e860d8bc-f4c3-4923-ba29-3fb022978027/volumes"
Nov 28 13:07:31 crc kubenswrapper[4779]: I1128 13:07:31.058818 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-v8d6d"]
Nov 28 13:07:31 crc kubenswrapper[4779]: I1128 13:07:31.068851 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-v8d6d"]
Nov 28 13:07:31 crc kubenswrapper[4779]: I1128 13:07:31.744048 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="16554019-fb17-4257-9bfd-1c1ffe3edb87" path="/var/lib/kubelet/pods/16554019-fb17-4257-9bfd-1c1ffe3edb87/volumes" Nov 28 13:07:39 crc kubenswrapper[4779]: I1128 13:07:39.914848 4779 generic.go:334] "Generic (PLEG): container finished" podID="10420a90-84fa-45f9-a726-b3fcb8db4a20" containerID="1a3209c15cfd04639bcd2d14351ee366c37e91d12ecf409d21f47f203ae1d830" exitCode=0 Nov 28 13:07:39 crc kubenswrapper[4779]: I1128 13:07:39.914947 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj" event={"ID":"10420a90-84fa-45f9-a726-b3fcb8db4a20","Type":"ContainerDied","Data":"1a3209c15cfd04639bcd2d14351ee366c37e91d12ecf409d21f47f203ae1d830"} Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.450794 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj" Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.612908 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g2nc\" (UniqueName: \"kubernetes.io/projected/10420a90-84fa-45f9-a726-b3fcb8db4a20-kube-api-access-8g2nc\") pod \"10420a90-84fa-45f9-a726-b3fcb8db4a20\" (UID: \"10420a90-84fa-45f9-a726-b3fcb8db4a20\") " Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.612977 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10420a90-84fa-45f9-a726-b3fcb8db4a20-ssh-key\") pod \"10420a90-84fa-45f9-a726-b3fcb8db4a20\" (UID: \"10420a90-84fa-45f9-a726-b3fcb8db4a20\") " Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.613011 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10420a90-84fa-45f9-a726-b3fcb8db4a20-inventory\") pod \"10420a90-84fa-45f9-a726-b3fcb8db4a20\" (UID: \"10420a90-84fa-45f9-a726-b3fcb8db4a20\") " Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.618854 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10420a90-84fa-45f9-a726-b3fcb8db4a20-kube-api-access-8g2nc" (OuterVolumeSpecName: "kube-api-access-8g2nc") pod "10420a90-84fa-45f9-a726-b3fcb8db4a20" (UID: "10420a90-84fa-45f9-a726-b3fcb8db4a20"). InnerVolumeSpecName "kube-api-access-8g2nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.638549 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10420a90-84fa-45f9-a726-b3fcb8db4a20-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "10420a90-84fa-45f9-a726-b3fcb8db4a20" (UID: "10420a90-84fa-45f9-a726-b3fcb8db4a20"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.660129 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10420a90-84fa-45f9-a726-b3fcb8db4a20-inventory" (OuterVolumeSpecName: "inventory") pod "10420a90-84fa-45f9-a726-b3fcb8db4a20" (UID: "10420a90-84fa-45f9-a726-b3fcb8db4a20"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.715538 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10420a90-84fa-45f9-a726-b3fcb8db4a20-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.715590 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10420a90-84fa-45f9-a726-b3fcb8db4a20-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.715612 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g2nc\" (UniqueName: \"kubernetes.io/projected/10420a90-84fa-45f9-a726-b3fcb8db4a20-kube-api-access-8g2nc\") on node \"crc\" DevicePath \"\"" Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.942359 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj" event={"ID":"10420a90-84fa-45f9-a726-b3fcb8db4a20","Type":"ContainerDied","Data":"05122ca2973c3b83d27967395048541e3058412e159b40d843aa2c89e4f4705a"} Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.942425 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05122ca2973c3b83d27967395048541e3058412e159b40d843aa2c89e4f4705a" Nov 28 13:07:41 crc kubenswrapper[4779]: I1128 13:07:41.942471 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cjqrj" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.037628 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5"] Nov 28 13:07:42 crc kubenswrapper[4779]: E1128 13:07:42.038426 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10420a90-84fa-45f9-a726-b3fcb8db4a20" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.038463 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="10420a90-84fa-45f9-a726-b3fcb8db4a20" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.038906 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="10420a90-84fa-45f9-a726-b3fcb8db4a20" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.040302 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.042476 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.042627 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.043054 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.043256 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.051541 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5"] Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.227446 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grqsx\" (UniqueName: \"kubernetes.io/projected/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-kube-api-access-grqsx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45xv5\" (UID: \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.227752 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45xv5\" (UID: \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.227892 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45xv5\" (UID: \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.330181 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grqsx\" (UniqueName: \"kubernetes.io/projected/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-kube-api-access-grqsx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45xv5\" (UID: \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.330599 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45xv5\" (UID: \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.330788 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45xv5\" 
(UID: \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.335208 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45xv5\" (UID: \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.335362 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45xv5\" (UID: \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.357159 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grqsx\" (UniqueName: \"kubernetes.io/projected/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-kube-api-access-grqsx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45xv5\" (UID: \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.365025 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.922837 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5"] Nov 28 13:07:42 crc kubenswrapper[4779]: W1128 13:07:42.936625 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21c70f9d_fd7b_4629_8b4f_0f745fd9eccb.slice/crio-21a4011180774100c118b2913b91acf0b71014120cb0499b5a06813ad9070fc2 WatchSource:0}: Error finding container 21a4011180774100c118b2913b91acf0b71014120cb0499b5a06813ad9070fc2: Status 404 returned error can't find the container with id 21a4011180774100c118b2913b91acf0b71014120cb0499b5a06813ad9070fc2 Nov 28 13:07:42 crc kubenswrapper[4779]: I1128 13:07:42.955296 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" event={"ID":"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb","Type":"ContainerStarted","Data":"21a4011180774100c118b2913b91acf0b71014120cb0499b5a06813ad9070fc2"} Nov 28 13:07:43 crc kubenswrapper[4779]: I1128 13:07:43.964260 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" event={"ID":"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb","Type":"ContainerStarted","Data":"6523308ed2f0c2f7166dae4efd5e9b59afb4169f608a98f433e6804be982e341"} Nov 28 13:07:43 crc kubenswrapper[4779]: I1128 13:07:43.985008 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" podStartSLOduration=1.533283724 podStartE2EDuration="1.984992451s" podCreationTimestamp="2025-11-28 13:07:42 +0000 UTC" firstStartedPulling="2025-11-28 13:07:42.939246495 +0000 UTC m=+1923.504921889" lastFinishedPulling="2025-11-28 13:07:43.390955232 +0000 UTC m=+1923.956630616" observedRunningTime="2025-11-28 
13:07:43.983489491 +0000 UTC m=+1924.549164855" watchObservedRunningTime="2025-11-28 13:07:43.984992451 +0000 UTC m=+1924.550667805" Nov 28 13:08:10 crc kubenswrapper[4779]: I1128 13:08:10.055790 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-chfvd"] Nov 28 13:08:10 crc kubenswrapper[4779]: I1128 13:08:10.070565 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-chfvd"] Nov 28 13:08:10 crc kubenswrapper[4779]: I1128 13:08:10.328978 4779 scope.go:117] "RemoveContainer" containerID="09d013213ac1b1692af87bda94005251c2127c5b1430ef379f877b931cfd15b4" Nov 28 13:08:10 crc kubenswrapper[4779]: I1128 13:08:10.394673 4779 scope.go:117] "RemoveContainer" containerID="7e2a4b1a3e594104c9d15bdc1c3db153f7d2a04d3dd12886c0a1faf3fb8e6dad" Nov 28 13:08:11 crc kubenswrapper[4779]: I1128 13:08:11.748437 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="393ab5ea-7256-4ffd-85c6-31c5548c4795" path="/var/lib/kubelet/pods/393ab5ea-7256-4ffd-85c6-31c5548c4795/volumes" Nov 28 13:08:46 crc kubenswrapper[4779]: I1128 13:08:46.674239 4779 generic.go:334] "Generic (PLEG): container finished" podID="21c70f9d-fd7b-4629-8b4f-0f745fd9eccb" containerID="6523308ed2f0c2f7166dae4efd5e9b59afb4169f608a98f433e6804be982e341" exitCode=0 Nov 28 13:08:46 crc kubenswrapper[4779]: I1128 13:08:46.674363 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" event={"ID":"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb","Type":"ContainerDied","Data":"6523308ed2f0c2f7166dae4efd5e9b59afb4169f608a98f433e6804be982e341"} Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.110316 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.286322 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grqsx\" (UniqueName: \"kubernetes.io/projected/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-kube-api-access-grqsx\") pod \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\" (UID: \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\") " Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.286494 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-inventory\") pod \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\" (UID: \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\") " Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.286566 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-ssh-key\") pod \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\" (UID: \"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb\") " Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.292739 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-kube-api-access-grqsx" (OuterVolumeSpecName: "kube-api-access-grqsx") pod "21c70f9d-fd7b-4629-8b4f-0f745fd9eccb" (UID: "21c70f9d-fd7b-4629-8b4f-0f745fd9eccb"). InnerVolumeSpecName "kube-api-access-grqsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.327229 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "21c70f9d-fd7b-4629-8b4f-0f745fd9eccb" (UID: "21c70f9d-fd7b-4629-8b4f-0f745fd9eccb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.330278 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-inventory" (OuterVolumeSpecName: "inventory") pod "21c70f9d-fd7b-4629-8b4f-0f745fd9eccb" (UID: "21c70f9d-fd7b-4629-8b4f-0f745fd9eccb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.388628 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grqsx\" (UniqueName: \"kubernetes.io/projected/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-kube-api-access-grqsx\") on node \"crc\" DevicePath \"\"" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.388689 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.388701 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21c70f9d-fd7b-4629-8b4f-0f745fd9eccb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.702691 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" event={"ID":"21c70f9d-fd7b-4629-8b4f-0f745fd9eccb","Type":"ContainerDied","Data":"21a4011180774100c118b2913b91acf0b71014120cb0499b5a06813ad9070fc2"} Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.702754 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45xv5" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.702764 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21a4011180774100c118b2913b91acf0b71014120cb0499b5a06813ad9070fc2" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.835361 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7mwdd"] Nov 28 13:08:48 crc kubenswrapper[4779]: E1128 13:08:48.835845 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c70f9d-fd7b-4629-8b4f-0f745fd9eccb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.835870 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c70f9d-fd7b-4629-8b4f-0f745fd9eccb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.836114 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c70f9d-fd7b-4629-8b4f-0f745fd9eccb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.836851 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.839665 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.840217 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.840599 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.840983 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.852858 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7mwdd"] Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.999588 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27fc90d6-d05c-412a-97fb-d9fe40d2a964-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7mwdd\" (UID: \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\") " pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:08:48 crc kubenswrapper[4779]: I1128 13:08:48.999721 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/27fc90d6-d05c-412a-97fb-d9fe40d2a964-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7mwdd\" (UID: \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\") " pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:08:49 crc kubenswrapper[4779]: I1128 13:08:48.999809 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4jfc\" (UniqueName: \"kubernetes.io/projected/27fc90d6-d05c-412a-97fb-d9fe40d2a964-kube-api-access-m4jfc\") pod \"ssh-known-hosts-edpm-deployment-7mwdd\" (UID: \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\") " pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:08:49 crc kubenswrapper[4779]: I1128 13:08:49.101518 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27fc90d6-d05c-412a-97fb-d9fe40d2a964-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7mwdd\" (UID: \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\") " pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:08:49 crc kubenswrapper[4779]: I1128 13:08:49.101611 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/27fc90d6-d05c-412a-97fb-d9fe40d2a964-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7mwdd\" (UID: \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\") " pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:08:49 crc kubenswrapper[4779]: I1128 13:08:49.102254 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4jfc\" (UniqueName: \"kubernetes.io/projected/27fc90d6-d05c-412a-97fb-d9fe40d2a964-kube-api-access-m4jfc\") pod \"ssh-known-hosts-edpm-deployment-7mwdd\" (UID: \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\") " pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:08:49 crc 
kubenswrapper[4779]: I1128 13:08:49.108514 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27fc90d6-d05c-412a-97fb-d9fe40d2a964-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7mwdd\" (UID: \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\") " pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:08:49 crc kubenswrapper[4779]: I1128 13:08:49.119188 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/27fc90d6-d05c-412a-97fb-d9fe40d2a964-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7mwdd\" (UID: \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\") " pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:08:49 crc kubenswrapper[4779]: I1128 13:08:49.124386 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4jfc\" (UniqueName: \"kubernetes.io/projected/27fc90d6-d05c-412a-97fb-d9fe40d2a964-kube-api-access-m4jfc\") pod \"ssh-known-hosts-edpm-deployment-7mwdd\" (UID: \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\") " pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:08:49 crc kubenswrapper[4779]: I1128 13:08:49.173517 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:08:49 crc kubenswrapper[4779]: I1128 13:08:49.749763 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7mwdd"] Nov 28 13:08:49 crc kubenswrapper[4779]: I1128 13:08:49.754022 4779 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 13:08:50 crc kubenswrapper[4779]: I1128 13:08:50.731952 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" event={"ID":"27fc90d6-d05c-412a-97fb-d9fe40d2a964","Type":"ContainerStarted","Data":"dcae673c4e292a8d71509017a34eced6bac700c7073495d5d6e9ac1f5359b81f"} Nov 28 13:08:51 crc kubenswrapper[4779]: I1128 13:08:51.748819 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" event={"ID":"27fc90d6-d05c-412a-97fb-d9fe40d2a964","Type":"ContainerStarted","Data":"991dc306b9b73950997dd35191b69b282e464de8b064e8becb4be73a821823ea"} Nov 28 13:08:51 crc kubenswrapper[4779]: I1128 13:08:51.776770 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" podStartSLOduration=2.13987872 podStartE2EDuration="3.776737435s" podCreationTimestamp="2025-11-28 13:08:48 +0000 UTC" firstStartedPulling="2025-11-28 13:08:49.753763069 +0000 UTC m=+1990.319438433" lastFinishedPulling="2025-11-28 13:08:51.390621794 +0000 UTC m=+1991.956297148" observedRunningTime="2025-11-28 13:08:51.76982388 +0000 UTC m=+1992.335499274" watchObservedRunningTime="2025-11-28 13:08:51.776737435 +0000 UTC m=+1992.342412849" Nov 28 13:08:59 crc kubenswrapper[4779]: I1128 13:08:59.823245 4779 generic.go:334] "Generic (PLEG): container finished" podID="27fc90d6-d05c-412a-97fb-d9fe40d2a964" containerID="991dc306b9b73950997dd35191b69b282e464de8b064e8becb4be73a821823ea" exitCode=0 Nov 28 13:08:59 crc kubenswrapper[4779]: I1128 13:08:59.823299 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" 
event={"ID":"27fc90d6-d05c-412a-97fb-d9fe40d2a964","Type":"ContainerDied","Data":"991dc306b9b73950997dd35191b69b282e464de8b064e8becb4be73a821823ea"} Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.325914 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.508879 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4jfc\" (UniqueName: \"kubernetes.io/projected/27fc90d6-d05c-412a-97fb-d9fe40d2a964-kube-api-access-m4jfc\") pod \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\" (UID: \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\") " Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.509039 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27fc90d6-d05c-412a-97fb-d9fe40d2a964-ssh-key-openstack-edpm-ipam\") pod \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\" (UID: \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\") " Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.509125 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/27fc90d6-d05c-412a-97fb-d9fe40d2a964-inventory-0\") pod \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\" (UID: \"27fc90d6-d05c-412a-97fb-d9fe40d2a964\") " Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.519339 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27fc90d6-d05c-412a-97fb-d9fe40d2a964-kube-api-access-m4jfc" (OuterVolumeSpecName: "kube-api-access-m4jfc") pod "27fc90d6-d05c-412a-97fb-d9fe40d2a964" (UID: "27fc90d6-d05c-412a-97fb-d9fe40d2a964"). InnerVolumeSpecName "kube-api-access-m4jfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.541086 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fc90d6-d05c-412a-97fb-d9fe40d2a964-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27fc90d6-d05c-412a-97fb-d9fe40d2a964" (UID: "27fc90d6-d05c-412a-97fb-d9fe40d2a964"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.543526 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27fc90d6-d05c-412a-97fb-d9fe40d2a964-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "27fc90d6-d05c-412a-97fb-d9fe40d2a964" (UID: "27fc90d6-d05c-412a-97fb-d9fe40d2a964"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.612691 4779 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/27fc90d6-d05c-412a-97fb-d9fe40d2a964-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.612743 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4jfc\" (UniqueName: \"kubernetes.io/projected/27fc90d6-d05c-412a-97fb-d9fe40d2a964-kube-api-access-m4jfc\") on node \"crc\" DevicePath \"\"" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.612769 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27fc90d6-d05c-412a-97fb-d9fe40d2a964-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.844239 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" event={"ID":"27fc90d6-d05c-412a-97fb-d9fe40d2a964","Type":"ContainerDied","Data":"dcae673c4e292a8d71509017a34eced6bac700c7073495d5d6e9ac1f5359b81f"} Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.844812 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcae673c4e292a8d71509017a34eced6bac700c7073495d5d6e9ac1f5359b81f" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.844346 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7mwdd" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.962133 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6"] Nov 28 13:09:01 crc kubenswrapper[4779]: E1128 13:09:01.962580 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fc90d6-d05c-412a-97fb-d9fe40d2a964" containerName="ssh-known-hosts-edpm-deployment" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.962603 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fc90d6-d05c-412a-97fb-d9fe40d2a964" containerName="ssh-known-hosts-edpm-deployment" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.962860 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fc90d6-d05c-412a-97fb-d9fe40d2a964" containerName="ssh-known-hosts-edpm-deployment" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.963605 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.967061 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.967357 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.967632 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.967761 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 13:09:01 crc kubenswrapper[4779]: I1128 13:09:01.973708 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6"] Nov 28 13:09:02 crc kubenswrapper[4779]: I1128 13:09:02.024318 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e42e6af-a9aa-47a7-86ff-980266468175-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-484k6\" (UID: \"4e42e6af-a9aa-47a7-86ff-980266468175\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" Nov 28 13:09:02 crc kubenswrapper[4779]: I1128 13:09:02.024522 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e42e6af-a9aa-47a7-86ff-980266468175-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-484k6\" (UID: \"4e42e6af-a9aa-47a7-86ff-980266468175\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" Nov 28 13:09:02 crc kubenswrapper[4779]: I1128 13:09:02.024638 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchvb\" (UniqueName: \"kubernetes.io/projected/4e42e6af-a9aa-47a7-86ff-980266468175-kube-api-access-xchvb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-484k6\" (UID: \"4e42e6af-a9aa-47a7-86ff-980266468175\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" Nov 28 13:09:02 crc kubenswrapper[4779]: I1128 13:09:02.126177 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e42e6af-a9aa-47a7-86ff-980266468175-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-484k6\" (UID: \"4e42e6af-a9aa-47a7-86ff-980266468175\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" Nov 28 13:09:02 crc kubenswrapper[4779]: I1128 13:09:02.126241 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e42e6af-a9aa-47a7-86ff-980266468175-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-484k6\" (UID: \"4e42e6af-a9aa-47a7-86ff-980266468175\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" Nov 28 13:09:02 crc kubenswrapper[4779]: I1128 13:09:02.126308 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xchvb\" (UniqueName: \"kubernetes.io/projected/4e42e6af-a9aa-47a7-86ff-980266468175-kube-api-access-xchvb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-484k6\" (UID: \"4e42e6af-a9aa-47a7-86ff-980266468175\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" Nov 28 13:09:02 crc kubenswrapper[4779]: I1128 13:09:02.130832 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e42e6af-a9aa-47a7-86ff-980266468175-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-484k6\" (UID: \"4e42e6af-a9aa-47a7-86ff-980266468175\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" Nov 28 13:09:02 crc kubenswrapper[4779]: I1128 13:09:02.135365 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e42e6af-a9aa-47a7-86ff-980266468175-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-484k6\" (UID: \"4e42e6af-a9aa-47a7-86ff-980266468175\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" Nov 28 13:09:02 crc kubenswrapper[4779]: I1128 13:09:02.146971 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xchvb\" (UniqueName: \"kubernetes.io/projected/4e42e6af-a9aa-47a7-86ff-980266468175-kube-api-access-xchvb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-484k6\" (UID: \"4e42e6af-a9aa-47a7-86ff-980266468175\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" Nov 28 13:09:02 crc kubenswrapper[4779]: I1128 13:09:02.293049 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" Nov 28 13:09:02 crc kubenswrapper[4779]: I1128 13:09:02.917933 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6"] Nov 28 13:09:02 crc kubenswrapper[4779]: W1128 13:09:02.926762 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e42e6af_a9aa_47a7_86ff_980266468175.slice/crio-7973026c32a2be119b5c175732e67f37b019f9edfaee49a4a84d1baaba660697 WatchSource:0}: Error finding container 7973026c32a2be119b5c175732e67f37b019f9edfaee49a4a84d1baaba660697: Status 404 returned error can't find the container with id 7973026c32a2be119b5c175732e67f37b019f9edfaee49a4a84d1baaba660697 Nov 28 13:09:03 crc kubenswrapper[4779]: I1128 13:09:03.868927 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" event={"ID":"4e42e6af-a9aa-47a7-86ff-980266468175","Type":"ContainerStarted","Data":"937b10ace5e88bd49d9e61d7e3526c20a01cc84a48604f25a7906ece3958f567"} Nov 28 13:09:03 crc kubenswrapper[4779]: I1128 13:09:03.869233 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" event={"ID":"4e42e6af-a9aa-47a7-86ff-980266468175","Type":"ContainerStarted","Data":"7973026c32a2be119b5c175732e67f37b019f9edfaee49a4a84d1baaba660697"} Nov 28 13:09:03 crc kubenswrapper[4779]: I1128 13:09:03.895163 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" podStartSLOduration=2.376240954 podStartE2EDuration="2.8951465s" podCreationTimestamp="2025-11-28 13:09:01 +0000 UTC" firstStartedPulling="2025-11-28 13:09:02.930766354 +0000 UTC m=+2003.496441718" lastFinishedPulling="2025-11-28 13:09:03.44967191 +0000 UTC m=+2004.015347264" observedRunningTime="2025-11-28 13:09:03.893637919 +0000 UTC m=+2004.459313283" watchObservedRunningTime="2025-11-28 13:09:03.8951465 +0000 UTC m=+2004.460821854" 
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.254054 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ddtmx"]
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.256512 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddtmx"
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.274578 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddtmx"]
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.363768 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6538b6b-711d-4c8f-8550-060e3f6bf803-catalog-content\") pod \"redhat-operators-ddtmx\" (UID: \"c6538b6b-711d-4c8f-8550-060e3f6bf803\") " pod="openshift-marketplace/redhat-operators-ddtmx"
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.363857 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjn2g\" (UniqueName: \"kubernetes.io/projected/c6538b6b-711d-4c8f-8550-060e3f6bf803-kube-api-access-vjn2g\") pod \"redhat-operators-ddtmx\" (UID: \"c6538b6b-711d-4c8f-8550-060e3f6bf803\") " pod="openshift-marketplace/redhat-operators-ddtmx"
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.364546 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6538b6b-711d-4c8f-8550-060e3f6bf803-utilities\") pod \"redhat-operators-ddtmx\" (UID: \"c6538b6b-711d-4c8f-8550-060e3f6bf803\") " pod="openshift-marketplace/redhat-operators-ddtmx"
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.466654 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6538b6b-711d-4c8f-8550-060e3f6bf803-catalog-content\") pod \"redhat-operators-ddtmx\" (UID: \"c6538b6b-711d-4c8f-8550-060e3f6bf803\") " pod="openshift-marketplace/redhat-operators-ddtmx"
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.466706 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjn2g\" (UniqueName: \"kubernetes.io/projected/c6538b6b-711d-4c8f-8550-060e3f6bf803-kube-api-access-vjn2g\") pod \"redhat-operators-ddtmx\" (UID: \"c6538b6b-711d-4c8f-8550-060e3f6bf803\") " pod="openshift-marketplace/redhat-operators-ddtmx"
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.466771 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6538b6b-711d-4c8f-8550-060e3f6bf803-utilities\") pod \"redhat-operators-ddtmx\" (UID: \"c6538b6b-711d-4c8f-8550-060e3f6bf803\") " pod="openshift-marketplace/redhat-operators-ddtmx"
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.467143 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6538b6b-711d-4c8f-8550-060e3f6bf803-catalog-content\") pod \"redhat-operators-ddtmx\" (UID: \"c6538b6b-711d-4c8f-8550-060e3f6bf803\") " pod="openshift-marketplace/redhat-operators-ddtmx"
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.467251 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6538b6b-711d-4c8f-8550-060e3f6bf803-utilities\") pod \"redhat-operators-ddtmx\" (UID: \"c6538b6b-711d-4c8f-8550-060e3f6bf803\") " pod="openshift-marketplace/redhat-operators-ddtmx"
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.484944 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjn2g\" (UniqueName: \"kubernetes.io/projected/c6538b6b-711d-4c8f-8550-060e3f6bf803-kube-api-access-vjn2g\") pod \"redhat-operators-ddtmx\" (UID: \"c6538b6b-711d-4c8f-8550-060e3f6bf803\") " pod="openshift-marketplace/redhat-operators-ddtmx"
Nov 28 13:09:09 crc kubenswrapper[4779]: I1128 13:09:09.580086 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddtmx"
Nov 28 13:09:10 crc kubenswrapper[4779]: W1128 13:09:10.078007 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6538b6b_711d_4c8f_8550_060e3f6bf803.slice/crio-aa9c002860a4412c3fdc26acdfcc33a3332ab9cfb174010e6ff48acc8e55cf5d WatchSource:0}: Error finding container aa9c002860a4412c3fdc26acdfcc33a3332ab9cfb174010e6ff48acc8e55cf5d: Status 404 returned error can't find the container with id aa9c002860a4412c3fdc26acdfcc33a3332ab9cfb174010e6ff48acc8e55cf5d
Nov 28 13:09:10 crc kubenswrapper[4779]: I1128 13:09:10.079367 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddtmx"]
Nov 28 13:09:10 crc kubenswrapper[4779]: I1128 13:09:10.515819 4779 scope.go:117] "RemoveContainer" containerID="9fc1031303213304d75dc6c33acf72c57b51f43bc27668120a9fe3ed304736da"
Nov 28 13:09:10 crc kubenswrapper[4779]: I1128 13:09:10.941175 4779 generic.go:334] "Generic (PLEG): container finished" podID="c6538b6b-711d-4c8f-8550-060e3f6bf803" containerID="0bfbeb0966d12d1c7af9f6b34aa5fb67f7c5a0c3189e0264ff651879427e37f4" exitCode=0
Nov 28 13:09:10 crc kubenswrapper[4779]: I1128 13:09:10.941445 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddtmx" event={"ID":"c6538b6b-711d-4c8f-8550-060e3f6bf803","Type":"ContainerDied","Data":"0bfbeb0966d12d1c7af9f6b34aa5fb67f7c5a0c3189e0264ff651879427e37f4"}
Nov 28 13:09:10 crc kubenswrapper[4779]: I1128 13:09:10.941470 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddtmx" event={"ID":"c6538b6b-711d-4c8f-8550-060e3f6bf803","Type":"ContainerStarted","Data":"aa9c002860a4412c3fdc26acdfcc33a3332ab9cfb174010e6ff48acc8e55cf5d"}
Nov 28 13:09:11 crc kubenswrapper[4779]: I1128 13:09:11.958355 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddtmx" event={"ID":"c6538b6b-711d-4c8f-8550-060e3f6bf803","Type":"ContainerStarted","Data":"988542b0c1aa997dad056c182bcb90546cdd7454d7f945ecad6b0b698cd3e488"}
Nov 28 13:09:13 crc kubenswrapper[4779]: I1128 13:09:13.981467 4779 generic.go:334] "Generic (PLEG): container finished" podID="4e42e6af-a9aa-47a7-86ff-980266468175" containerID="937b10ace5e88bd49d9e61d7e3526c20a01cc84a48604f25a7906ece3958f567" exitCode=0
Nov 28 13:09:13 crc kubenswrapper[4779]: I1128 13:09:13.981999 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" event={"ID":"4e42e6af-a9aa-47a7-86ff-980266468175","Type":"ContainerDied","Data":"937b10ace5e88bd49d9e61d7e3526c20a01cc84a48604f25a7906ece3958f567"}
Nov 28 13:09:13 crc kubenswrapper[4779]: I1128 13:09:13.987011 4779 generic.go:334] "Generic (PLEG): container finished" podID="c6538b6b-711d-4c8f-8550-060e3f6bf803" containerID="988542b0c1aa997dad056c182bcb90546cdd7454d7f945ecad6b0b698cd3e488" exitCode=0
Nov 28 13:09:13 crc kubenswrapper[4779]: I1128 13:09:13.987072 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddtmx" event={"ID":"c6538b6b-711d-4c8f-8550-060e3f6bf803","Type":"ContainerDied","Data":"988542b0c1aa997dad056c182bcb90546cdd7454d7f945ecad6b0b698cd3e488"}
Nov 28 13:09:15 crc kubenswrapper[4779]: I1128 13:09:15.018919 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddtmx" event={"ID":"c6538b6b-711d-4c8f-8550-060e3f6bf803","Type":"ContainerStarted","Data":"7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5"}
Nov 28 13:09:15 crc kubenswrapper[4779]: I1128 13:09:15.458014 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6"
Nov 28 13:09:15 crc kubenswrapper[4779]: I1128 13:09:15.477194 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ddtmx" podStartSLOduration=3.010642284 podStartE2EDuration="6.47717174s" podCreationTimestamp="2025-11-28 13:09:09 +0000 UTC" firstStartedPulling="2025-11-28 13:09:10.943688028 +0000 UTC m=+2011.509363392" lastFinishedPulling="2025-11-28 13:09:14.410217454 +0000 UTC m=+2014.975892848" observedRunningTime="2025-11-28 13:09:15.040486359 +0000 UTC m=+2015.606161733" watchObservedRunningTime="2025-11-28 13:09:15.47717174 +0000 UTC m=+2016.042847124"
Nov 28 13:09:15 crc kubenswrapper[4779]: I1128 13:09:15.609401 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xchvb\" (UniqueName: \"kubernetes.io/projected/4e42e6af-a9aa-47a7-86ff-980266468175-kube-api-access-xchvb\") pod \"4e42e6af-a9aa-47a7-86ff-980266468175\" (UID: \"4e42e6af-a9aa-47a7-86ff-980266468175\") "
Nov 28 13:09:15 crc kubenswrapper[4779]: I1128 13:09:15.609509 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e42e6af-a9aa-47a7-86ff-980266468175-ssh-key\") pod \"4e42e6af-a9aa-47a7-86ff-980266468175\" (UID: \"4e42e6af-a9aa-47a7-86ff-980266468175\") "
Nov 28 13:09:15 crc kubenswrapper[4779]: I1128 13:09:15.609580 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e42e6af-a9aa-47a7-86ff-980266468175-inventory\") pod \"4e42e6af-a9aa-47a7-86ff-980266468175\" (UID: \"4e42e6af-a9aa-47a7-86ff-980266468175\") "
Nov 28 13:09:15 crc kubenswrapper[4779]: I1128 13:09:15.619965 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e42e6af-a9aa-47a7-86ff-980266468175-kube-api-access-xchvb" (OuterVolumeSpecName: "kube-api-access-xchvb") pod "4e42e6af-a9aa-47a7-86ff-980266468175" (UID: "4e42e6af-a9aa-47a7-86ff-980266468175"). InnerVolumeSpecName "kube-api-access-xchvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:09:15 crc kubenswrapper[4779]: I1128 13:09:15.645195 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e42e6af-a9aa-47a7-86ff-980266468175-inventory" (OuterVolumeSpecName: "inventory") pod "4e42e6af-a9aa-47a7-86ff-980266468175" (UID: "4e42e6af-a9aa-47a7-86ff-980266468175"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:09:15 crc kubenswrapper[4779]: I1128 13:09:15.647498 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e42e6af-a9aa-47a7-86ff-980266468175-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4e42e6af-a9aa-47a7-86ff-980266468175" (UID: "4e42e6af-a9aa-47a7-86ff-980266468175"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:09:15 crc kubenswrapper[4779]: I1128 13:09:15.711491 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xchvb\" (UniqueName: \"kubernetes.io/projected/4e42e6af-a9aa-47a7-86ff-980266468175-kube-api-access-xchvb\") on node \"crc\" DevicePath \"\""
Nov 28 13:09:15 crc kubenswrapper[4779]: I1128 13:09:15.711524 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e42e6af-a9aa-47a7-86ff-980266468175-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 13:09:15 crc kubenswrapper[4779]: I1128 13:09:15.711533 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e42e6af-a9aa-47a7-86ff-980266468175-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.033186 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.033075 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-484k6" event={"ID":"4e42e6af-a9aa-47a7-86ff-980266468175","Type":"ContainerDied","Data":"7973026c32a2be119b5c175732e67f37b019f9edfaee49a4a84d1baaba660697"}
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.034048 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7973026c32a2be119b5c175732e67f37b019f9edfaee49a4a84d1baaba660697"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.090548 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"]
Nov 28 13:09:16 crc kubenswrapper[4779]: E1128 13:09:16.091065 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e42e6af-a9aa-47a7-86ff-980266468175" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.091140 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e42e6af-a9aa-47a7-86ff-980266468175" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.091478 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e42e6af-a9aa-47a7-86ff-980266468175" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.092422 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.095564 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.095845 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.095874 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.096718 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.105253 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"]
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.220162 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw\" (UID: \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.220209 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srl8f\" (UniqueName: \"kubernetes.io/projected/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-kube-api-access-srl8f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw\" (UID: \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.220247 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw\" (UID: \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.285015 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.285155 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.322453 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw\" (UID: \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.322508 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srl8f\" (UniqueName: \"kubernetes.io/projected/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-kube-api-access-srl8f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw\" (UID: \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.322550 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw\" (UID: \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.326616 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw\" (UID: \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.332885 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw\" (UID: \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.405795 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srl8f\" (UniqueName: \"kubernetes.io/projected/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-kube-api-access-srl8f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw\" (UID: \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"
Nov 28 13:09:16 crc kubenswrapper[4779]: I1128 13:09:16.417741 4779 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw" Nov 28 13:09:17 crc kubenswrapper[4779]: I1128 13:09:17.099750 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw"] Nov 28 13:09:18 crc kubenswrapper[4779]: I1128 13:09:18.051217 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw" event={"ID":"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449","Type":"ContainerStarted","Data":"e4c0271ee622ab634fe9875d01922af8dfcbe544dd16b78703f25dfa527ed269"} Nov 28 13:09:18 crc kubenswrapper[4779]: I1128 13:09:18.051500 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw" event={"ID":"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449","Type":"ContainerStarted","Data":"c5e1025c05c37d0ba3b386264035d2c793aa17623972a9c2fc59725ef46a2b03"} Nov 28 13:09:18 crc kubenswrapper[4779]: I1128 13:09:18.075525 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw" podStartSLOduration=1.6285179680000001 podStartE2EDuration="2.075498083s" podCreationTimestamp="2025-11-28 13:09:16 +0000 UTC" firstStartedPulling="2025-11-28 13:09:17.092023759 +0000 UTC m=+2017.657699143" lastFinishedPulling="2025-11-28 13:09:17.539003904 +0000 UTC m=+2018.104679258" observedRunningTime="2025-11-28 13:09:18.0753999 +0000 UTC m=+2018.641075254" watchObservedRunningTime="2025-11-28 13:09:18.075498083 +0000 UTC m=+2018.641173457" Nov 28 13:09:19 crc kubenswrapper[4779]: I1128 13:09:19.580979 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ddtmx" Nov 28 13:09:19 crc kubenswrapper[4779]: I1128 13:09:19.581364 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ddtmx" Nov 28 13:09:20 crc kubenswrapper[4779]: I1128 13:09:20.657229 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ddtmx" podUID="c6538b6b-711d-4c8f-8550-060e3f6bf803" containerName="registry-server" probeResult="failure" output=< Nov 28 13:09:20 crc kubenswrapper[4779]: timeout: failed to connect service ":50051" within 1s Nov 28 13:09:20 crc kubenswrapper[4779]: > Nov 28 13:09:28 crc kubenswrapper[4779]: I1128 13:09:28.197187 4779 generic.go:334] "Generic (PLEG): container finished" podID="dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449" containerID="e4c0271ee622ab634fe9875d01922af8dfcbe544dd16b78703f25dfa527ed269" exitCode=0 Nov 28 13:09:28 crc kubenswrapper[4779]: I1128 13:09:28.197558 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw" event={"ID":"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449","Type":"ContainerDied","Data":"e4c0271ee622ab634fe9875d01922af8dfcbe544dd16b78703f25dfa527ed269"} Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.644285 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw" Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.646581 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ddtmx" Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.716012 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ddtmx" Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.840962 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-ssh-key\") pod \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\" (UID: \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\") " Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.841522 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srl8f\" (UniqueName: \"kubernetes.io/projected/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-kube-api-access-srl8f\") pod \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\" (UID: \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\") " Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.841587 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-inventory\") pod \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\" (UID: \"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449\") " Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.846833 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-kube-api-access-srl8f" (OuterVolumeSpecName: "kube-api-access-srl8f") pod "dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449" (UID: "dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449"). InnerVolumeSpecName "kube-api-access-srl8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.871764 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449" (UID: "dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.873007 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-inventory" (OuterVolumeSpecName: "inventory") pod "dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449" (UID: "dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.908208 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddtmx"] Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.944127 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srl8f\" (UniqueName: \"kubernetes.io/projected/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-kube-api-access-srl8f\") on node \"crc\" DevicePath \"\"" Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.944241 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 13:09:29 crc kubenswrapper[4779]: I1128 13:09:29.944317 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.219815 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw" event={"ID":"dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449","Type":"ContainerDied","Data":"c5e1025c05c37d0ba3b386264035d2c793aa17623972a9c2fc59725ef46a2b03"} Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.219860 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.219877 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5e1025c05c37d0ba3b386264035d2c793aa17623972a9c2fc59725ef46a2b03" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.337414 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg"] Nov 28 13:09:30 crc kubenswrapper[4779]: E1128 13:09:30.337989 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.338022 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.338465 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.339570 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.343892 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.344652 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.344904 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.345169 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.345456 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.345678 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.345892 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.347837 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg"] Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.346224 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.453173 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.453225 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.453254 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.453276 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.453312 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.453419 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.453532 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.453566 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.453879 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.454201 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.454300 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48mdp\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-kube-api-access-48mdp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: 
\"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.454368 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.454473 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.454614 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.556562 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.556715 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.556771 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48mdp\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-kube-api-access-48mdp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.556819 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.556885 4779 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.556947 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.557026 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.557067 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.557140 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.557171 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.557220 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.557269 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.557378 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.557426 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.562459 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.564190 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.564285 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.564485 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.564548 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 
13:09:30.565796 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.565876 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.565973 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.566703 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.567386 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.567521 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.568387 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.568952 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.582687 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48mdp\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-kube-api-access-48mdp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-htvcg\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:30 crc kubenswrapper[4779]: I1128 13:09:30.687334 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:09:31 crc kubenswrapper[4779]: I1128 13:09:31.230347 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ddtmx" podUID="c6538b6b-711d-4c8f-8550-060e3f6bf803" containerName="registry-server" containerID="cri-o://7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5" gracePeriod=2 Nov 28 13:09:31 crc kubenswrapper[4779]: I1128 13:09:31.290905 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg"] Nov 28 13:09:31 crc kubenswrapper[4779]: W1128 13:09:31.340015 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85a14d9a_2667_48a0_83c1_2e37f92590fb.slice/crio-6e9b69d717a51718694302240e02e2b6aa619bf750205f219d43461e5446ba20 WatchSource:0}: Error finding container 6e9b69d717a51718694302240e02e2b6aa619bf750205f219d43461e5446ba20: Status 404 returned error can't find the container with id 6e9b69d717a51718694302240e02e2b6aa619bf750205f219d43461e5446ba20 Nov 28 13:09:31 crc kubenswrapper[4779]: E1128 13:09:31.481790 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6538b6b_711d_4c8f_8550_060e3f6bf803.slice/crio-7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5.scope\": RecentStats: unable to find data in memory cache]" Nov 28 13:09:31 crc kubenswrapper[4779]: I1128 13:09:31.732196 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ddtmx" Nov 28 13:09:31 crc kubenswrapper[4779]: I1128 13:09:31.880768 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6538b6b-711d-4c8f-8550-060e3f6bf803-utilities\") pod \"c6538b6b-711d-4c8f-8550-060e3f6bf803\" (UID: \"c6538b6b-711d-4c8f-8550-060e3f6bf803\") " Nov 28 13:09:31 crc kubenswrapper[4779]: I1128 13:09:31.881069 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjn2g\" (UniqueName: \"kubernetes.io/projected/c6538b6b-711d-4c8f-8550-060e3f6bf803-kube-api-access-vjn2g\") pod \"c6538b6b-711d-4c8f-8550-060e3f6bf803\" (UID: \"c6538b6b-711d-4c8f-8550-060e3f6bf803\") " Nov 28 13:09:31 crc kubenswrapper[4779]: I1128 13:09:31.881332 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6538b6b-711d-4c8f-8550-060e3f6bf803-catalog-content\") pod \"c6538b6b-711d-4c8f-8550-060e3f6bf803\" (UID: \"c6538b6b-711d-4c8f-8550-060e3f6bf803\") " Nov 28 13:09:31 crc kubenswrapper[4779]: I1128 13:09:31.882008 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6538b6b-711d-4c8f-8550-060e3f6bf803-utilities" (OuterVolumeSpecName: "utilities") pod "c6538b6b-711d-4c8f-8550-060e3f6bf803" (UID: "c6538b6b-711d-4c8f-8550-060e3f6bf803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:09:31 crc kubenswrapper[4779]: I1128 13:09:31.893491 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6538b6b-711d-4c8f-8550-060e3f6bf803-kube-api-access-vjn2g" (OuterVolumeSpecName: "kube-api-access-vjn2g") pod "c6538b6b-711d-4c8f-8550-060e3f6bf803" (UID: "c6538b6b-711d-4c8f-8550-060e3f6bf803"). InnerVolumeSpecName "kube-api-access-vjn2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:09:31 crc kubenswrapper[4779]: I1128 13:09:31.984384 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6538b6b-711d-4c8f-8550-060e3f6bf803-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:09:31 crc kubenswrapper[4779]: I1128 13:09:31.984428 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjn2g\" (UniqueName: \"kubernetes.io/projected/c6538b6b-711d-4c8f-8550-060e3f6bf803-kube-api-access-vjn2g\") on node \"crc\" DevicePath \"\"" Nov 28 13:09:31 crc kubenswrapper[4779]: I1128 13:09:31.993575 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6538b6b-711d-4c8f-8550-060e3f6bf803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6538b6b-711d-4c8f-8550-060e3f6bf803" (UID: "c6538b6b-711d-4c8f-8550-060e3f6bf803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.086253 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6538b6b-711d-4c8f-8550-060e3f6bf803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.243728 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" event={"ID":"85a14d9a-2667-48a0-83c1-2e37f92590fb","Type":"ContainerStarted","Data":"6e9b69d717a51718694302240e02e2b6aa619bf750205f219d43461e5446ba20"} Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.247295 4779 generic.go:334] "Generic (PLEG): container finished" podID="c6538b6b-711d-4c8f-8550-060e3f6bf803" containerID="7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5" exitCode=0 Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.247356 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddtmx" event={"ID":"c6538b6b-711d-4c8f-8550-060e3f6bf803","Type":"ContainerDied","Data":"7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5"} Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.247376 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddtmx" Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.247404 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddtmx" event={"ID":"c6538b6b-711d-4c8f-8550-060e3f6bf803","Type":"ContainerDied","Data":"aa9c002860a4412c3fdc26acdfcc33a3332ab9cfb174010e6ff48acc8e55cf5d"} Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.247435 4779 scope.go:117] "RemoveContainer" containerID="7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5" Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.290856 4779 scope.go:117] "RemoveContainer" containerID="988542b0c1aa997dad056c182bcb90546cdd7454d7f945ecad6b0b698cd3e488" Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.314566 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddtmx"] Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.324888 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ddtmx"] Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.331404 4779 scope.go:117] "RemoveContainer" containerID="0bfbeb0966d12d1c7af9f6b34aa5fb67f7c5a0c3189e0264ff651879427e37f4" Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.369046 4779 scope.go:117] "RemoveContainer" containerID="7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5" Nov 28 13:09:32 crc kubenswrapper[4779]: E1128 13:09:32.370311 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5\": container with ID starting with 7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5 not found: ID does not exist" containerID="7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5" Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.370398 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5"} err="failed to get container status 
\"7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5\": rpc error: code = NotFound desc = could not find container \"7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5\": container with ID starting with 7766fd01918b2ad9fdab53866d7b66a54a904a48a888e773c39a4b79a5e029b5 not found: ID does not exist" Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.370435 4779 scope.go:117] "RemoveContainer" containerID="988542b0c1aa997dad056c182bcb90546cdd7454d7f945ecad6b0b698cd3e488" Nov 28 13:09:32 crc kubenswrapper[4779]: E1128 13:09:32.371041 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"988542b0c1aa997dad056c182bcb90546cdd7454d7f945ecad6b0b698cd3e488\": container with ID starting with 988542b0c1aa997dad056c182bcb90546cdd7454d7f945ecad6b0b698cd3e488 not found: ID does not exist" containerID="988542b0c1aa997dad056c182bcb90546cdd7454d7f945ecad6b0b698cd3e488" Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.371083 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988542b0c1aa997dad056c182bcb90546cdd7454d7f945ecad6b0b698cd3e488"} err="failed to get container status \"988542b0c1aa997dad056c182bcb90546cdd7454d7f945ecad6b0b698cd3e488\": rpc error: code = NotFound desc = could not find container \"988542b0c1aa997dad056c182bcb90546cdd7454d7f945ecad6b0b698cd3e488\": container with ID starting with 988542b0c1aa997dad056c182bcb90546cdd7454d7f945ecad6b0b698cd3e488 not found: ID does not exist" Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.371154 4779 scope.go:117] "RemoveContainer" containerID="0bfbeb0966d12d1c7af9f6b34aa5fb67f7c5a0c3189e0264ff651879427e37f4" Nov 28 13:09:32 crc kubenswrapper[4779]: E1128 13:09:32.371506 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bfbeb0966d12d1c7af9f6b34aa5fb67f7c5a0c3189e0264ff651879427e37f4\": container with ID starting with 0bfbeb0966d12d1c7af9f6b34aa5fb67f7c5a0c3189e0264ff651879427e37f4 not found: ID does not exist" containerID="0bfbeb0966d12d1c7af9f6b34aa5fb67f7c5a0c3189e0264ff651879427e37f4" Nov 28 13:09:32 crc kubenswrapper[4779]: I1128 13:09:32.371544 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bfbeb0966d12d1c7af9f6b34aa5fb67f7c5a0c3189e0264ff651879427e37f4"} err="failed to get container status \"0bfbeb0966d12d1c7af9f6b34aa5fb67f7c5a0c3189e0264ff651879427e37f4\": rpc error: code = NotFound desc = could not find container \"0bfbeb0966d12d1c7af9f6b34aa5fb67f7c5a0c3189e0264ff651879427e37f4\": container with ID starting with 0bfbeb0966d12d1c7af9f6b34aa5fb67f7c5a0c3189e0264ff651879427e37f4 not found: ID does not exist" Nov 28 13:09:33 crc kubenswrapper[4779]: I1128 13:09:33.261214 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" event={"ID":"85a14d9a-2667-48a0-83c1-2e37f92590fb","Type":"ContainerStarted","Data":"52a32e307e3736ca205e6c449a82f1be332107c527c19dd3ebc54b1a0fc10211"} Nov 28 13:09:33 crc kubenswrapper[4779]: I1128 13:09:33.296918 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" podStartSLOduration=2.606727967 podStartE2EDuration="3.296890552s" podCreationTimestamp="2025-11-28 13:09:30 +0000 UTC" firstStartedPulling="2025-11-28 13:09:31.342788567 +0000 UTC m=+2031.908463921" 
lastFinishedPulling="2025-11-28 13:09:32.032951112 +0000 UTC m=+2032.598626506" observedRunningTime="2025-11-28 13:09:33.286183886 +0000 UTC m=+2033.851859270" watchObservedRunningTime="2025-11-28 13:09:33.296890552 +0000 UTC m=+2033.862565956" Nov 28 13:09:33 crc kubenswrapper[4779]: I1128 13:09:33.749526 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6538b6b-711d-4c8f-8550-060e3f6bf803" path="/var/lib/kubelet/pods/c6538b6b-711d-4c8f-8550-060e3f6bf803/volumes" Nov 28 13:09:46 crc kubenswrapper[4779]: I1128 13:09:46.284946 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:09:46 crc kubenswrapper[4779]: I1128 13:09:46.285674 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:10:16 crc kubenswrapper[4779]: I1128 13:10:16.285663 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:10:16 crc kubenswrapper[4779]: I1128 13:10:16.286357 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:10:16 crc kubenswrapper[4779]: I1128 13:10:16.286437 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 13:10:16 crc kubenswrapper[4779]: I1128 13:10:16.288604 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"618d5fbe87fa7087aedd943e484ddc1d1d52c7576dd65968b53ae378fd1610f9"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 13:10:16 crc kubenswrapper[4779]: I1128 13:10:16.288741 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://618d5fbe87fa7087aedd943e484ddc1d1d52c7576dd65968b53ae378fd1610f9" gracePeriod=600 Nov 28 13:10:17 crc kubenswrapper[4779]: I1128 13:10:17.757380 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" podUID="f1d9753d-b49d-4e32-b312-137314283984" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 13:10:17 crc kubenswrapper[4779]: I1128 13:10:17.760252 4779 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-7574d9569-x822f" podUID="f1d9753d-b49d-4e32-b312-137314283984" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 13:10:18 crc kubenswrapper[4779]: I1128 13:10:18.854701 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="618d5fbe87fa7087aedd943e484ddc1d1d52c7576dd65968b53ae378fd1610f9" exitCode=0 Nov 28 13:10:18 crc kubenswrapper[4779]: I1128 13:10:18.854802 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"618d5fbe87fa7087aedd943e484ddc1d1d52c7576dd65968b53ae378fd1610f9"} Nov 28 13:10:18 crc kubenswrapper[4779]: I1128 13:10:18.857196 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94"} Nov 28 13:10:18 crc kubenswrapper[4779]: I1128 13:10:18.857288 4779 scope.go:117] "RemoveContainer" containerID="3a5057813024b5f9eddaf198924d294d2253857acafc1e169a218697e2d27bcf" Nov 28 13:10:19 crc kubenswrapper[4779]: I1128 13:10:19.875408 4779 generic.go:334] "Generic (PLEG): container finished" podID="85a14d9a-2667-48a0-83c1-2e37f92590fb" containerID="52a32e307e3736ca205e6c449a82f1be332107c527c19dd3ebc54b1a0fc10211" exitCode=0 Nov 28 13:10:19 crc kubenswrapper[4779]: I1128 13:10:19.875549 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" event={"ID":"85a14d9a-2667-48a0-83c1-2e37f92590fb","Type":"ContainerDied","Data":"52a32e307e3736ca205e6c449a82f1be332107c527c19dd3ebc54b1a0fc10211"} Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.383465 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.526331 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-ovn-combined-ca-bundle\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.526436 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-neutron-metadata-combined-ca-bundle\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.526500 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-telemetry-combined-ca-bundle\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.526536 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-nova-combined-ca-bundle\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.527803 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-inventory\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.527879 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.527928 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-repo-setup-combined-ca-bundle\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.527983 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-ssh-key\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.528057 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.528145 4779 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.528193 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.528232 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-libvirt-combined-ca-bundle\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.528325 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48mdp\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-kube-api-access-48mdp\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.528385 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-bootstrap-combined-ca-bundle\") pod \"85a14d9a-2667-48a0-83c1-2e37f92590fb\" (UID: \"85a14d9a-2667-48a0-83c1-2e37f92590fb\") " Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.536040 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.536060 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.536300 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.536830 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.538173 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.539460 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.539562 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.539627 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.540553 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.541346 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-kube-api-access-48mdp" (OuterVolumeSpecName: "kube-api-access-48mdp") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "kube-api-access-48mdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.542409 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.559377 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.586390 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.588411 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-inventory" (OuterVolumeSpecName: "inventory") pod "85a14d9a-2667-48a0-83c1-2e37f92590fb" (UID: "85a14d9a-2667-48a0-83c1-2e37f92590fb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631220 4779 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631275 4779 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631290 4779 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631305 4779 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631317 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631331 4779 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631344 4779 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631356 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631367 4779 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631381 4779 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631396 4779 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631410 4779 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631424 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48mdp\" (UniqueName: \"kubernetes.io/projected/85a14d9a-2667-48a0-83c1-2e37f92590fb-kube-api-access-48mdp\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.631436 4779 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a14d9a-2667-48a0-83c1-2e37f92590fb-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.898959 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" event={"ID":"85a14d9a-2667-48a0-83c1-2e37f92590fb","Type":"ContainerDied","Data":"6e9b69d717a51718694302240e02e2b6aa619bf750205f219d43461e5446ba20"} Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.899007 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e9b69d717a51718694302240e02e2b6aa619bf750205f219d43461e5446ba20" Nov 28 13:10:21 crc kubenswrapper[4779]: I1128 13:10:21.899075 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-htvcg" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.056626 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n"] Nov 28 13:10:22 crc kubenswrapper[4779]: E1128 13:10:22.057203 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a14d9a-2667-48a0-83c1-2e37f92590fb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.057235 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a14d9a-2667-48a0-83c1-2e37f92590fb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 13:10:22 crc kubenswrapper[4779]: E1128 13:10:22.057264 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6538b6b-711d-4c8f-8550-060e3f6bf803" containerName="registry-server" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.057277 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6538b6b-711d-4c8f-8550-060e3f6bf803" containerName="registry-server" Nov 28 13:10:22 crc kubenswrapper[4779]: E1128 13:10:22.057310 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6538b6b-711d-4c8f-8550-060e3f6bf803" containerName="extract-content" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.057321 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6538b6b-711d-4c8f-8550-060e3f6bf803" containerName="extract-content" Nov 28 13:10:22 crc kubenswrapper[4779]: E1128 13:10:22.057353 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6538b6b-711d-4c8f-8550-060e3f6bf803" containerName="extract-utilities" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.057365 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6538b6b-711d-4c8f-8550-060e3f6bf803" containerName="extract-utilities" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.057693 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6538b6b-711d-4c8f-8550-060e3f6bf803" containerName="registry-server" Nov 28 13:10:22 crc kubenswrapper[4779]: 
I1128 13:10:22.057755 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="85a14d9a-2667-48a0-83c1-2e37f92590fb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.058774 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.064564 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.064779 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.064788 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.064950 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.065125 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.087574 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n"] Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.143676 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.143874 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.143918 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6763a62-0f2c-4f57-9391-731d12201cce-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.144007 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.144205 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m6jc\" (UniqueName: \"kubernetes.io/projected/b6763a62-0f2c-4f57-9391-731d12201cce-kube-api-access-6m6jc\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.247019 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m6jc\" (UniqueName: \"kubernetes.io/projected/b6763a62-0f2c-4f57-9391-731d12201cce-kube-api-access-6m6jc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.247112 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.247170 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.247197 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6763a62-0f2c-4f57-9391-731d12201cce-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.247255 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.251595 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6763a62-0f2c-4f57-9391-731d12201cce-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.251787 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.251799 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc 
kubenswrapper[4779]: I1128 13:10:22.252820 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.267861 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m6jc\" (UniqueName: \"kubernetes.io/projected/b6763a62-0f2c-4f57-9391-731d12201cce-kube-api-access-6m6jc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-nbj5n\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.406567 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.794435 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n"] Nov 28 13:10:22 crc kubenswrapper[4779]: W1128 13:10:22.796727 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6763a62_0f2c_4f57_9391_731d12201cce.slice/crio-3042dc0eb88a7d746862dc47dcc389f2fdd00e1d3a68aa688284c434b4178767 WatchSource:0}: Error finding container 3042dc0eb88a7d746862dc47dcc389f2fdd00e1d3a68aa688284c434b4178767: Status 404 returned error can't find the container with id 3042dc0eb88a7d746862dc47dcc389f2fdd00e1d3a68aa688284c434b4178767 Nov 28 13:10:22 crc kubenswrapper[4779]: I1128 13:10:22.911718 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" event={"ID":"b6763a62-0f2c-4f57-9391-731d12201cce","Type":"ContainerStarted","Data":"3042dc0eb88a7d746862dc47dcc389f2fdd00e1d3a68aa688284c434b4178767"} Nov 28 13:10:24 crc kubenswrapper[4779]: I1128 13:10:24.947843 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" event={"ID":"b6763a62-0f2c-4f57-9391-731d12201cce","Type":"ContainerStarted","Data":"9c3413cfb0b89eedacc790f1fc501e4356472010844b7cc0cf9b19328101cc20"} Nov 28 13:10:24 crc kubenswrapper[4779]: I1128 13:10:24.993759 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" podStartSLOduration=1.72618629 podStartE2EDuration="2.993738467s" podCreationTimestamp="2025-11-28 13:10:22 +0000 UTC" firstStartedPulling="2025-11-28 13:10:22.799694388 +0000 UTC m=+2083.365369782" lastFinishedPulling="2025-11-28 13:10:24.067246605 +0000 UTC m=+2084.632921959" observedRunningTime="2025-11-28 13:10:24.97888395 +0000 UTC m=+2085.544559364" watchObservedRunningTime="2025-11-28 13:10:24.993738467 +0000 UTC m=+2085.559413841" Nov 28 13:11:37 crc kubenswrapper[4779]: I1128 13:11:37.778313 4779 generic.go:334] "Generic (PLEG): container finished" podID="b6763a62-0f2c-4f57-9391-731d12201cce" containerID="9c3413cfb0b89eedacc790f1fc501e4356472010844b7cc0cf9b19328101cc20" exitCode=0 Nov 28 13:11:37 crc kubenswrapper[4779]: I1128 13:11:37.778393 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" 
event={"ID":"b6763a62-0f2c-4f57-9391-731d12201cce","Type":"ContainerDied","Data":"9c3413cfb0b89eedacc790f1fc501e4356472010844b7cc0cf9b19328101cc20"} Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.335456 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.441804 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6763a62-0f2c-4f57-9391-731d12201cce-ovncontroller-config-0\") pod \"b6763a62-0f2c-4f57-9391-731d12201cce\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.441891 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m6jc\" (UniqueName: \"kubernetes.io/projected/b6763a62-0f2c-4f57-9391-731d12201cce-kube-api-access-6m6jc\") pod \"b6763a62-0f2c-4f57-9391-731d12201cce\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.442163 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-inventory\") pod \"b6763a62-0f2c-4f57-9391-731d12201cce\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.442217 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-ssh-key\") pod \"b6763a62-0f2c-4f57-9391-731d12201cce\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.442415 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-ovn-combined-ca-bundle\") pod \"b6763a62-0f2c-4f57-9391-731d12201cce\" (UID: \"b6763a62-0f2c-4f57-9391-731d12201cce\") " Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.448035 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6763a62-0f2c-4f57-9391-731d12201cce-kube-api-access-6m6jc" (OuterVolumeSpecName: "kube-api-access-6m6jc") pod "b6763a62-0f2c-4f57-9391-731d12201cce" (UID: "b6763a62-0f2c-4f57-9391-731d12201cce"). InnerVolumeSpecName "kube-api-access-6m6jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.448745 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b6763a62-0f2c-4f57-9391-731d12201cce" (UID: "b6763a62-0f2c-4f57-9391-731d12201cce"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.479376 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b6763a62-0f2c-4f57-9391-731d12201cce" (UID: "b6763a62-0f2c-4f57-9391-731d12201cce"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.489939 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6763a62-0f2c-4f57-9391-731d12201cce-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "b6763a62-0f2c-4f57-9391-731d12201cce" (UID: "b6763a62-0f2c-4f57-9391-731d12201cce"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.498242 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-inventory" (OuterVolumeSpecName: "inventory") pod "b6763a62-0f2c-4f57-9391-731d12201cce" (UID: "b6763a62-0f2c-4f57-9391-731d12201cce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.545208 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m6jc\" (UniqueName: \"kubernetes.io/projected/b6763a62-0f2c-4f57-9391-731d12201cce-kube-api-access-6m6jc\") on node \"crc\" DevicePath \"\"" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.545243 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.545254 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.545265 4779 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6763a62-0f2c-4f57-9391-731d12201cce-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.545278 4779 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b6763a62-0f2c-4f57-9391-731d12201cce-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.804801 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" event={"ID":"b6763a62-0f2c-4f57-9391-731d12201cce","Type":"ContainerDied","Data":"3042dc0eb88a7d746862dc47dcc389f2fdd00e1d3a68aa688284c434b4178767"} Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.804861 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3042dc0eb88a7d746862dc47dcc389f2fdd00e1d3a68aa688284c434b4178767" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.804876 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-nbj5n" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.939547 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8"] Nov 28 13:11:39 crc kubenswrapper[4779]: E1128 13:11:39.940061 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6763a62-0f2c-4f57-9391-731d12201cce" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.940082 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6763a62-0f2c-4f57-9391-731d12201cce" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.940358 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6763a62-0f2c-4f57-9391-731d12201cce" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.941205 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.946667 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.947132 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.947392 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.947187 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.947691 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.947908 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 28 13:11:39 crc kubenswrapper[4779]: I1128 13:11:39.964972 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8"] Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.057844 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.058259 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.058344 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.058648 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.059118 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2trxc\" (UniqueName: \"kubernetes.io/projected/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-kube-api-access-2trxc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.059337 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.161611 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2trxc\" (UniqueName: \"kubernetes.io/projected/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-kube-api-access-2trxc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.161784 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.161867 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.161917 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: 
\"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.161991 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.162135 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.167068 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.167634 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.172308 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.173516 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.175078 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.181581 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2trxc\" 
(UniqueName: \"kubernetes.io/projected/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-kube-api-access-2trxc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.260839 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:11:40 crc kubenswrapper[4779]: I1128 13:11:40.884546 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8"] Nov 28 13:11:40 crc kubenswrapper[4779]: W1128 13:11:40.895837 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b291c86_c80b_41e0_9ebd_bff5f1d3de42.slice/crio-1150be453aab6abf54a989e52fa8f61041d0dd8467bdcc9d7140466205bfe869 WatchSource:0}: Error finding container 1150be453aab6abf54a989e52fa8f61041d0dd8467bdcc9d7140466205bfe869: Status 404 returned error can't find the container with id 1150be453aab6abf54a989e52fa8f61041d0dd8467bdcc9d7140466205bfe869 Nov 28 13:11:41 crc kubenswrapper[4779]: I1128 13:11:41.845117 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" event={"ID":"6b291c86-c80b-41e0-9ebd-bff5f1d3de42","Type":"ContainerStarted","Data":"3d782226e748547a042e1615db326cbc03c73a1f9205cd8352afd50b12b6de42"} Nov 28 13:11:41 crc kubenswrapper[4779]: I1128 13:11:41.845858 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" event={"ID":"6b291c86-c80b-41e0-9ebd-bff5f1d3de42","Type":"ContainerStarted","Data":"1150be453aab6abf54a989e52fa8f61041d0dd8467bdcc9d7140466205bfe869"} Nov 28 13:11:41 crc kubenswrapper[4779]: I1128 13:11:41.884820 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" podStartSLOduration=2.339680685 podStartE2EDuration="2.884783834s" podCreationTimestamp="2025-11-28 13:11:39 +0000 UTC" firstStartedPulling="2025-11-28 13:11:40.898290528 +0000 UTC m=+2161.463965892" lastFinishedPulling="2025-11-28 13:11:41.443393687 +0000 UTC m=+2162.009069041" observedRunningTime="2025-11-28 13:11:41.866206527 +0000 UTC m=+2162.431881891" watchObservedRunningTime="2025-11-28 13:11:41.884783834 +0000 UTC m=+2162.450459238" Nov 28 13:11:45 crc kubenswrapper[4779]: E1128 13:11:45.023944 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6763a62_0f2c_4f57_9391_731d12201cce.slice\": RecentStats: unable to find data in memory cache]" Nov 28 13:11:55 crc kubenswrapper[4779]: E1128 13:11:55.318081 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6763a62_0f2c_4f57_9391_731d12201cce.slice\": RecentStats: unable to find data in memory cache]" Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.074831 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4ptkk"] Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.079686 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.101264 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4ptkk"] Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.154787 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54370cfc-b67c-4720-b096-1b2b593e0fe2-utilities\") pod \"community-operators-4ptkk\" (UID: \"54370cfc-b67c-4720-b096-1b2b593e0fe2\") " pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.154903 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54370cfc-b67c-4720-b096-1b2b593e0fe2-catalog-content\") pod \"community-operators-4ptkk\" (UID: \"54370cfc-b67c-4720-b096-1b2b593e0fe2\") " pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.154952 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvcrx\" (UniqueName: \"kubernetes.io/projected/54370cfc-b67c-4720-b096-1b2b593e0fe2-kube-api-access-lvcrx\") pod \"community-operators-4ptkk\" (UID: \"54370cfc-b67c-4720-b096-1b2b593e0fe2\") " pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.257691 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54370cfc-b67c-4720-b096-1b2b593e0fe2-catalog-content\") pod \"community-operators-4ptkk\" (UID: \"54370cfc-b67c-4720-b096-1b2b593e0fe2\") " pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.257756 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvcrx\" (UniqueName: \"kubernetes.io/projected/54370cfc-b67c-4720-b096-1b2b593e0fe2-kube-api-access-lvcrx\") pod \"community-operators-4ptkk\" (UID: \"54370cfc-b67c-4720-b096-1b2b593e0fe2\") " pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.257813 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54370cfc-b67c-4720-b096-1b2b593e0fe2-utilities\") pod \"community-operators-4ptkk\" (UID: \"54370cfc-b67c-4720-b096-1b2b593e0fe2\") " pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.258177 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54370cfc-b67c-4720-b096-1b2b593e0fe2-catalog-content\") pod \"community-operators-4ptkk\" (UID: \"54370cfc-b67c-4720-b096-1b2b593e0fe2\") " pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.258261 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54370cfc-b67c-4720-b096-1b2b593e0fe2-utilities\") pod \"community-operators-4ptkk\" (UID: \"54370cfc-b67c-4720-b096-1b2b593e0fe2\") " pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.277029 4779 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lvcrx\" (UniqueName: \"kubernetes.io/projected/54370cfc-b67c-4720-b096-1b2b593e0fe2-kube-api-access-lvcrx\") pod \"community-operators-4ptkk\" (UID: \"54370cfc-b67c-4720-b096-1b2b593e0fe2\") " pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.459795 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:02 crc kubenswrapper[4779]: I1128 13:12:02.968848 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4ptkk"] Nov 28 13:12:02 crc kubenswrapper[4779]: W1128 13:12:02.992639 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54370cfc_b67c_4720_b096_1b2b593e0fe2.slice/crio-ea96c2c4e95b28f15c37388d2f25346dddce49a267718e3c74ace381252ac34e WatchSource:0}: Error finding container ea96c2c4e95b28f15c37388d2f25346dddce49a267718e3c74ace381252ac34e: Status 404 returned error can't find the container with id ea96c2c4e95b28f15c37388d2f25346dddce49a267718e3c74ace381252ac34e Nov 28 13:12:03 crc kubenswrapper[4779]: I1128 13:12:03.085357 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ptkk" event={"ID":"54370cfc-b67c-4720-b096-1b2b593e0fe2","Type":"ContainerStarted","Data":"ea96c2c4e95b28f15c37388d2f25346dddce49a267718e3c74ace381252ac34e"} Nov 28 13:12:04 crc kubenswrapper[4779]: I1128 13:12:04.100804 4779 generic.go:334] "Generic (PLEG): container finished" podID="54370cfc-b67c-4720-b096-1b2b593e0fe2" containerID="0fb089a4b7d4449df32af3f91e59054419093b656a13be483a8204c807db8a0e" exitCode=0 Nov 28 13:12:04 crc kubenswrapper[4779]: I1128 13:12:04.100923 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ptkk" event={"ID":"54370cfc-b67c-4720-b096-1b2b593e0fe2","Type":"ContainerDied","Data":"0fb089a4b7d4449df32af3f91e59054419093b656a13be483a8204c807db8a0e"} Nov 28 13:12:05 crc kubenswrapper[4779]: E1128 13:12:05.628517 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6763a62_0f2c_4f57_9391_731d12201cce.slice\": RecentStats: unable to find data in memory cache]" Nov 28 13:12:06 crc kubenswrapper[4779]: I1128 13:12:06.133718 4779 generic.go:334] "Generic (PLEG): container finished" podID="54370cfc-b67c-4720-b096-1b2b593e0fe2" containerID="3eb6aa0fff70b3313dc3dd02547cf188439ef93cbdea960b5179de34822d1f3a" exitCode=0 Nov 28 13:12:06 crc kubenswrapper[4779]: I1128 13:12:06.133850 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ptkk" event={"ID":"54370cfc-b67c-4720-b096-1b2b593e0fe2","Type":"ContainerDied","Data":"3eb6aa0fff70b3313dc3dd02547cf188439ef93cbdea960b5179de34822d1f3a"} Nov 28 13:12:07 crc kubenswrapper[4779]: I1128 13:12:07.146336 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ptkk" event={"ID":"54370cfc-b67c-4720-b096-1b2b593e0fe2","Type":"ContainerStarted","Data":"c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3"} Nov 28 13:12:07 crc kubenswrapper[4779]: I1128 13:12:07.169581 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4ptkk" 
podStartSLOduration=2.672559735 podStartE2EDuration="5.169557299s" podCreationTimestamp="2025-11-28 13:12:02 +0000 UTC" firstStartedPulling="2025-11-28 13:12:04.103065764 +0000 UTC m=+2184.668741118" lastFinishedPulling="2025-11-28 13:12:06.600063318 +0000 UTC m=+2187.165738682" observedRunningTime="2025-11-28 13:12:07.164812332 +0000 UTC m=+2187.730487686" watchObservedRunningTime="2025-11-28 13:12:07.169557299 +0000 UTC m=+2187.735232693" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.435780 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qmrzh"] Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.438213 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.455997 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qmrzh"] Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.461241 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.461304 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.519502 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.587624 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6206e27-63ca-4f34-a479-6a460ca9b8b5-catalog-content\") pod \"certified-operators-qmrzh\" (UID: \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\") " pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.587800 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44lch\" (UniqueName: \"kubernetes.io/projected/d6206e27-63ca-4f34-a479-6a460ca9b8b5-kube-api-access-44lch\") pod \"certified-operators-qmrzh\" (UID: \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\") " pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.587858 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6206e27-63ca-4f34-a479-6a460ca9b8b5-utilities\") pod \"certified-operators-qmrzh\" (UID: \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\") " pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.690028 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6206e27-63ca-4f34-a479-6a460ca9b8b5-catalog-content\") pod \"certified-operators-qmrzh\" (UID: \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\") " pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.690148 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44lch\" (UniqueName: \"kubernetes.io/projected/d6206e27-63ca-4f34-a479-6a460ca9b8b5-kube-api-access-44lch\") pod \"certified-operators-qmrzh\" (UID: \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\") " 
pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.690186 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6206e27-63ca-4f34-a479-6a460ca9b8b5-utilities\") pod \"certified-operators-qmrzh\" (UID: \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\") " pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.690605 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6206e27-63ca-4f34-a479-6a460ca9b8b5-catalog-content\") pod \"certified-operators-qmrzh\" (UID: \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\") " pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.690615 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6206e27-63ca-4f34-a479-6a460ca9b8b5-utilities\") pod \"certified-operators-qmrzh\" (UID: \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\") " pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.718369 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44lch\" (UniqueName: \"kubernetes.io/projected/d6206e27-63ca-4f34-a479-6a460ca9b8b5-kube-api-access-44lch\") pod \"certified-operators-qmrzh\" (UID: \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\") " pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:12 crc kubenswrapper[4779]: I1128 13:12:12.768038 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:13 crc kubenswrapper[4779]: I1128 13:12:13.259553 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:13 crc kubenswrapper[4779]: I1128 13:12:13.259920 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qmrzh"] Nov 28 13:12:14 crc kubenswrapper[4779]: I1128 13:12:14.219669 4779 generic.go:334] "Generic (PLEG): container finished" podID="d6206e27-63ca-4f34-a479-6a460ca9b8b5" containerID="d2f8eb02e8ca19a5d2cb66bc3e5a4aea2d067b4fcc7489223b36485f0f70b179" exitCode=0 Nov 28 13:12:14 crc kubenswrapper[4779]: I1128 13:12:14.219703 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmrzh" event={"ID":"d6206e27-63ca-4f34-a479-6a460ca9b8b5","Type":"ContainerDied","Data":"d2f8eb02e8ca19a5d2cb66bc3e5a4aea2d067b4fcc7489223b36485f0f70b179"} Nov 28 13:12:14 crc kubenswrapper[4779]: I1128 13:12:14.220131 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmrzh" event={"ID":"d6206e27-63ca-4f34-a479-6a460ca9b8b5","Type":"ContainerStarted","Data":"8dadc377bd6b70431ab034d608d3968ef6832a5cf3d08b68171d01463d88be6b"} Nov 28 13:12:14 crc kubenswrapper[4779]: I1128 13:12:14.827938 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4ptkk"] Nov 28 13:12:15 crc kubenswrapper[4779]: I1128 13:12:15.234510 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4ptkk" podUID="54370cfc-b67c-4720-b096-1b2b593e0fe2" containerName="registry-server" 
containerID="cri-o://c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3" gracePeriod=2 Nov 28 13:12:15 crc kubenswrapper[4779]: I1128 13:12:15.874281 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:15 crc kubenswrapper[4779]: E1128 13:12:15.919411 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6206e27_63ca_4f34_a479_6a460ca9b8b5.slice/crio-conmon-fe719ce2469bb8237a1e4615826a8a05b8b5e57f8a7566f2127649d5be4894c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6206e27_63ca_4f34_a479_6a460ca9b8b5.slice/crio-fe719ce2469bb8237a1e4615826a8a05b8b5e57f8a7566f2127649d5be4894c3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6763a62_0f2c_4f57_9391_731d12201cce.slice\": RecentStats: unable to find data in memory cache]" Nov 28 13:12:15 crc kubenswrapper[4779]: I1128 13:12:15.953971 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvcrx\" (UniqueName: \"kubernetes.io/projected/54370cfc-b67c-4720-b096-1b2b593e0fe2-kube-api-access-lvcrx\") pod \"54370cfc-b67c-4720-b096-1b2b593e0fe2\" (UID: \"54370cfc-b67c-4720-b096-1b2b593e0fe2\") " Nov 28 13:12:15 crc kubenswrapper[4779]: I1128 13:12:15.954037 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54370cfc-b67c-4720-b096-1b2b593e0fe2-utilities\") pod \"54370cfc-b67c-4720-b096-1b2b593e0fe2\" (UID: \"54370cfc-b67c-4720-b096-1b2b593e0fe2\") " Nov 28 13:12:15 crc kubenswrapper[4779]: I1128 13:12:15.954204 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54370cfc-b67c-4720-b096-1b2b593e0fe2-catalog-content\") pod \"54370cfc-b67c-4720-b096-1b2b593e0fe2\" (UID: \"54370cfc-b67c-4720-b096-1b2b593e0fe2\") " Nov 28 13:12:15 crc kubenswrapper[4779]: I1128 13:12:15.954995 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54370cfc-b67c-4720-b096-1b2b593e0fe2-utilities" (OuterVolumeSpecName: "utilities") pod "54370cfc-b67c-4720-b096-1b2b593e0fe2" (UID: "54370cfc-b67c-4720-b096-1b2b593e0fe2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:12:15 crc kubenswrapper[4779]: I1128 13:12:15.964022 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54370cfc-b67c-4720-b096-1b2b593e0fe2-kube-api-access-lvcrx" (OuterVolumeSpecName: "kube-api-access-lvcrx") pod "54370cfc-b67c-4720-b096-1b2b593e0fe2" (UID: "54370cfc-b67c-4720-b096-1b2b593e0fe2"). InnerVolumeSpecName "kube-api-access-lvcrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.056322 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvcrx\" (UniqueName: \"kubernetes.io/projected/54370cfc-b67c-4720-b096-1b2b593e0fe2-kube-api-access-lvcrx\") on node \"crc\" DevicePath \"\"" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.056349 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54370cfc-b67c-4720-b096-1b2b593e0fe2-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.251147 4779 generic.go:334] "Generic (PLEG): container finished" podID="54370cfc-b67c-4720-b096-1b2b593e0fe2" containerID="c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3" exitCode=0 Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.251247 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4ptkk" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.251293 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ptkk" event={"ID":"54370cfc-b67c-4720-b096-1b2b593e0fe2","Type":"ContainerDied","Data":"c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3"} Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.251378 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4ptkk" event={"ID":"54370cfc-b67c-4720-b096-1b2b593e0fe2","Type":"ContainerDied","Data":"ea96c2c4e95b28f15c37388d2f25346dddce49a267718e3c74ace381252ac34e"} Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.251420 4779 scope.go:117] "RemoveContainer" containerID="c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.254447 4779 generic.go:334] "Generic (PLEG): container finished" podID="d6206e27-63ca-4f34-a479-6a460ca9b8b5" containerID="fe719ce2469bb8237a1e4615826a8a05b8b5e57f8a7566f2127649d5be4894c3" exitCode=0 Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.254502 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmrzh" event={"ID":"d6206e27-63ca-4f34-a479-6a460ca9b8b5","Type":"ContainerDied","Data":"fe719ce2469bb8237a1e4615826a8a05b8b5e57f8a7566f2127649d5be4894c3"} Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.305401 4779 scope.go:117] "RemoveContainer" containerID="3eb6aa0fff70b3313dc3dd02547cf188439ef93cbdea960b5179de34822d1f3a" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.330888 4779 scope.go:117] "RemoveContainer" containerID="0fb089a4b7d4449df32af3f91e59054419093b656a13be483a8204c807db8a0e" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.356073 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54370cfc-b67c-4720-b096-1b2b593e0fe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54370cfc-b67c-4720-b096-1b2b593e0fe2" (UID: "54370cfc-b67c-4720-b096-1b2b593e0fe2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.363356 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54370cfc-b67c-4720-b096-1b2b593e0fe2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.377008 4779 scope.go:117] "RemoveContainer" containerID="c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3" Nov 28 13:12:16 crc kubenswrapper[4779]: E1128 13:12:16.377540 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3\": container with ID starting with c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3 not found: ID does not exist" containerID="c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.377582 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3"} err="failed to get container status \"c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3\": rpc error: code = NotFound desc = could not find container \"c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3\": container with ID starting with c959182650fa0a559d257beb91418f8e6e2ee291dd1c69213c328df91ab9e1b3 not found: ID does not exist" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.377608 4779 scope.go:117] "RemoveContainer" containerID="3eb6aa0fff70b3313dc3dd02547cf188439ef93cbdea960b5179de34822d1f3a" Nov 28 13:12:16 crc kubenswrapper[4779]: E1128 13:12:16.377896 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eb6aa0fff70b3313dc3dd02547cf188439ef93cbdea960b5179de34822d1f3a\": container with ID starting with 3eb6aa0fff70b3313dc3dd02547cf188439ef93cbdea960b5179de34822d1f3a not found: ID does not exist" containerID="3eb6aa0fff70b3313dc3dd02547cf188439ef93cbdea960b5179de34822d1f3a" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.377919 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb6aa0fff70b3313dc3dd02547cf188439ef93cbdea960b5179de34822d1f3a"} err="failed to get container status \"3eb6aa0fff70b3313dc3dd02547cf188439ef93cbdea960b5179de34822d1f3a\": rpc error: code = NotFound desc = could not find container \"3eb6aa0fff70b3313dc3dd02547cf188439ef93cbdea960b5179de34822d1f3a\": container with ID starting with 3eb6aa0fff70b3313dc3dd02547cf188439ef93cbdea960b5179de34822d1f3a not found: ID does not exist" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.377933 4779 scope.go:117] "RemoveContainer" containerID="0fb089a4b7d4449df32af3f91e59054419093b656a13be483a8204c807db8a0e" Nov 28 13:12:16 crc kubenswrapper[4779]: E1128 13:12:16.378162 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fb089a4b7d4449df32af3f91e59054419093b656a13be483a8204c807db8a0e\": container with ID starting with 0fb089a4b7d4449df32af3f91e59054419093b656a13be483a8204c807db8a0e not found: ID does not exist" containerID="0fb089a4b7d4449df32af3f91e59054419093b656a13be483a8204c807db8a0e" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.378186 4779 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fb089a4b7d4449df32af3f91e59054419093b656a13be483a8204c807db8a0e"} err="failed to get container status \"0fb089a4b7d4449df32af3f91e59054419093b656a13be483a8204c807db8a0e\": rpc error: code = NotFound desc = could not find container \"0fb089a4b7d4449df32af3f91e59054419093b656a13be483a8204c807db8a0e\": container with ID starting with 0fb089a4b7d4449df32af3f91e59054419093b656a13be483a8204c807db8a0e not found: ID does not exist" Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.593810 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4ptkk"] Nov 28 13:12:16 crc kubenswrapper[4779]: I1128 13:12:16.605261 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4ptkk"] Nov 28 13:12:17 crc kubenswrapper[4779]: I1128 13:12:17.743294 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54370cfc-b67c-4720-b096-1b2b593e0fe2" path="/var/lib/kubelet/pods/54370cfc-b67c-4720-b096-1b2b593e0fe2/volumes" Nov 28 13:12:18 crc kubenswrapper[4779]: I1128 13:12:18.284171 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmrzh" event={"ID":"d6206e27-63ca-4f34-a479-6a460ca9b8b5","Type":"ContainerStarted","Data":"7fb5c486a8bf9f701369718f44944c97bc3c2303fc2ab0d1a7af630b9a48bd77"} Nov 28 13:12:18 crc kubenswrapper[4779]: I1128 13:12:18.332497 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qmrzh" podStartSLOduration=3.326921122 podStartE2EDuration="6.332472829s" podCreationTimestamp="2025-11-28 13:12:12 +0000 UTC" firstStartedPulling="2025-11-28 13:12:14.222964768 +0000 UTC m=+2194.788640122" lastFinishedPulling="2025-11-28 13:12:17.228516475 +0000 UTC m=+2197.794191829" observedRunningTime="2025-11-28 13:12:18.31266086 +0000 UTC m=+2198.878336284" watchObservedRunningTime="2025-11-28 13:12:18.332472829 +0000 UTC m=+2198.898148223" Nov 28 13:12:22 crc kubenswrapper[4779]: I1128 13:12:22.769384 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:22 crc kubenswrapper[4779]: I1128 13:12:22.770180 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:22 crc kubenswrapper[4779]: I1128 13:12:22.874338 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:23 crc kubenswrapper[4779]: I1128 13:12:23.436226 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:26 crc kubenswrapper[4779]: E1128 13:12:26.210079 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6763a62_0f2c_4f57_9391_731d12201cce.slice\": RecentStats: unable to find data in memory cache]" Nov 28 13:12:26 crc kubenswrapper[4779]: I1128 13:12:26.441015 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qmrzh"] Nov 28 13:12:26 crc kubenswrapper[4779]: I1128 13:12:26.441419 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qmrzh" 
podUID="d6206e27-63ca-4f34-a479-6a460ca9b8b5" containerName="registry-server" containerID="cri-o://7fb5c486a8bf9f701369718f44944c97bc3c2303fc2ab0d1a7af630b9a48bd77" gracePeriod=2 Nov 28 13:12:27 crc kubenswrapper[4779]: I1128 13:12:27.395203 4779 generic.go:334] "Generic (PLEG): container finished" podID="d6206e27-63ca-4f34-a479-6a460ca9b8b5" containerID="7fb5c486a8bf9f701369718f44944c97bc3c2303fc2ab0d1a7af630b9a48bd77" exitCode=0 Nov 28 13:12:27 crc kubenswrapper[4779]: I1128 13:12:27.395292 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmrzh" event={"ID":"d6206e27-63ca-4f34-a479-6a460ca9b8b5","Type":"ContainerDied","Data":"7fb5c486a8bf9f701369718f44944c97bc3c2303fc2ab0d1a7af630b9a48bd77"} Nov 28 13:12:27 crc kubenswrapper[4779]: I1128 13:12:27.525509 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:27 crc kubenswrapper[4779]: I1128 13:12:27.601967 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44lch\" (UniqueName: \"kubernetes.io/projected/d6206e27-63ca-4f34-a479-6a460ca9b8b5-kube-api-access-44lch\") pod \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\" (UID: \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\") " Nov 28 13:12:27 crc kubenswrapper[4779]: I1128 13:12:27.602166 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6206e27-63ca-4f34-a479-6a460ca9b8b5-catalog-content\") pod \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\" (UID: \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\") " Nov 28 13:12:27 crc kubenswrapper[4779]: I1128 13:12:27.602241 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6206e27-63ca-4f34-a479-6a460ca9b8b5-utilities\") pod \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\" (UID: \"d6206e27-63ca-4f34-a479-6a460ca9b8b5\") " Nov 28 13:12:27 crc kubenswrapper[4779]: I1128 13:12:27.603408 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6206e27-63ca-4f34-a479-6a460ca9b8b5-utilities" (OuterVolumeSpecName: "utilities") pod "d6206e27-63ca-4f34-a479-6a460ca9b8b5" (UID: "d6206e27-63ca-4f34-a479-6a460ca9b8b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:12:27 crc kubenswrapper[4779]: I1128 13:12:27.625664 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6206e27-63ca-4f34-a479-6a460ca9b8b5-kube-api-access-44lch" (OuterVolumeSpecName: "kube-api-access-44lch") pod "d6206e27-63ca-4f34-a479-6a460ca9b8b5" (UID: "d6206e27-63ca-4f34-a479-6a460ca9b8b5"). InnerVolumeSpecName "kube-api-access-44lch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:12:27 crc kubenswrapper[4779]: I1128 13:12:27.650148 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6206e27-63ca-4f34-a479-6a460ca9b8b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6206e27-63ca-4f34-a479-6a460ca9b8b5" (UID: "d6206e27-63ca-4f34-a479-6a460ca9b8b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:12:27 crc kubenswrapper[4779]: I1128 13:12:27.704841 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44lch\" (UniqueName: \"kubernetes.io/projected/d6206e27-63ca-4f34-a479-6a460ca9b8b5-kube-api-access-44lch\") on node \"crc\" DevicePath \"\"" Nov 28 13:12:27 crc kubenswrapper[4779]: I1128 13:12:27.704885 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6206e27-63ca-4f34-a479-6a460ca9b8b5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:12:27 crc kubenswrapper[4779]: I1128 13:12:27.704898 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6206e27-63ca-4f34-a479-6a460ca9b8b5-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:12:28 crc kubenswrapper[4779]: I1128 13:12:28.409943 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmrzh" event={"ID":"d6206e27-63ca-4f34-a479-6a460ca9b8b5","Type":"ContainerDied","Data":"8dadc377bd6b70431ab034d608d3968ef6832a5cf3d08b68171d01463d88be6b"} Nov 28 13:12:28 crc kubenswrapper[4779]: I1128 13:12:28.410814 4779 scope.go:117] "RemoveContainer" containerID="7fb5c486a8bf9f701369718f44944c97bc3c2303fc2ab0d1a7af630b9a48bd77" Nov 28 13:12:28 crc kubenswrapper[4779]: I1128 13:12:28.410037 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qmrzh" Nov 28 13:12:28 crc kubenswrapper[4779]: I1128 13:12:28.442719 4779 scope.go:117] "RemoveContainer" containerID="fe719ce2469bb8237a1e4615826a8a05b8b5e57f8a7566f2127649d5be4894c3" Nov 28 13:12:28 crc kubenswrapper[4779]: I1128 13:12:28.455150 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qmrzh"] Nov 28 13:12:28 crc kubenswrapper[4779]: I1128 13:12:28.463637 4779 scope.go:117] "RemoveContainer" containerID="d2f8eb02e8ca19a5d2cb66bc3e5a4aea2d067b4fcc7489223b36485f0f70b179" Nov 28 13:12:28 crc kubenswrapper[4779]: I1128 13:12:28.465017 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qmrzh"] Nov 28 13:12:29 crc kubenswrapper[4779]: I1128 13:12:29.742952 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6206e27-63ca-4f34-a479-6a460ca9b8b5" path="/var/lib/kubelet/pods/d6206e27-63ca-4f34-a479-6a460ca9b8b5/volumes" Nov 28 13:12:36 crc kubenswrapper[4779]: E1128 13:12:36.494480 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6763a62_0f2c_4f57_9391_731d12201cce.slice\": RecentStats: unable to find data in memory cache]" Nov 28 13:12:39 crc kubenswrapper[4779]: I1128 13:12:39.540643 4779 generic.go:334] "Generic (PLEG): container finished" podID="6b291c86-c80b-41e0-9ebd-bff5f1d3de42" containerID="3d782226e748547a042e1615db326cbc03c73a1f9205cd8352afd50b12b6de42" exitCode=0 Nov 28 13:12:39 crc kubenswrapper[4779]: I1128 13:12:39.540751 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" event={"ID":"6b291c86-c80b-41e0-9ebd-bff5f1d3de42","Type":"ContainerDied","Data":"3d782226e748547a042e1615db326cbc03c73a1f9205cd8352afd50b12b6de42"} Nov 28 13:12:40 crc kubenswrapper[4779]: I1128 13:12:40.977497 4779 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.074922 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-ssh-key\") pod \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.075147 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-neutron-metadata-combined-ca-bundle\") pod \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.075188 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-nova-metadata-neutron-config-0\") pod \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.075303 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2trxc\" (UniqueName: \"kubernetes.io/projected/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-kube-api-access-2trxc\") pod \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.075362 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-neutron-ovn-metadata-agent-neutron-config-0\") pod \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.075418 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-inventory\") pod \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\" (UID: \"6b291c86-c80b-41e0-9ebd-bff5f1d3de42\") " Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.108688 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-kube-api-access-2trxc" (OuterVolumeSpecName: "kube-api-access-2trxc") pod "6b291c86-c80b-41e0-9ebd-bff5f1d3de42" (UID: "6b291c86-c80b-41e0-9ebd-bff5f1d3de42"). InnerVolumeSpecName "kube-api-access-2trxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.113397 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6b291c86-c80b-41e0-9ebd-bff5f1d3de42" (UID: "6b291c86-c80b-41e0-9ebd-bff5f1d3de42"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.115064 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-inventory" (OuterVolumeSpecName: "inventory") pod "6b291c86-c80b-41e0-9ebd-bff5f1d3de42" (UID: "6b291c86-c80b-41e0-9ebd-bff5f1d3de42"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.115154 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "6b291c86-c80b-41e0-9ebd-bff5f1d3de42" (UID: "6b291c86-c80b-41e0-9ebd-bff5f1d3de42"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.130575 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "6b291c86-c80b-41e0-9ebd-bff5f1d3de42" (UID: "6b291c86-c80b-41e0-9ebd-bff5f1d3de42"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.130658 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6b291c86-c80b-41e0-9ebd-bff5f1d3de42" (UID: "6b291c86-c80b-41e0-9ebd-bff5f1d3de42"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.177433 4779 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.177466 4779 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.177478 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2trxc\" (UniqueName: \"kubernetes.io/projected/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-kube-api-access-2trxc\") on node \"crc\" DevicePath \"\"" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.177489 4779 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.177500 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.177508 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b291c86-c80b-41e0-9ebd-bff5f1d3de42-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.564367 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" event={"ID":"6b291c86-c80b-41e0-9ebd-bff5f1d3de42","Type":"ContainerDied","Data":"1150be453aab6abf54a989e52fa8f61041d0dd8467bdcc9d7140466205bfe869"} Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.564428 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1150be453aab6abf54a989e52fa8f61041d0dd8467bdcc9d7140466205bfe869" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.564467 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.675444 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh"] Nov 28 13:12:41 crc kubenswrapper[4779]: E1128 13:12:41.675843 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6206e27-63ca-4f34-a479-6a460ca9b8b5" containerName="extract-utilities" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.675863 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6206e27-63ca-4f34-a479-6a460ca9b8b5" containerName="extract-utilities" Nov 28 13:12:41 crc kubenswrapper[4779]: E1128 13:12:41.675877 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54370cfc-b67c-4720-b096-1b2b593e0fe2" containerName="extract-utilities" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.675885 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="54370cfc-b67c-4720-b096-1b2b593e0fe2" containerName="extract-utilities" Nov 28 13:12:41 crc kubenswrapper[4779]: E1128 13:12:41.675906 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6206e27-63ca-4f34-a479-6a460ca9b8b5" containerName="registry-server" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.675919 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6206e27-63ca-4f34-a479-6a460ca9b8b5" containerName="registry-server" Nov 28 13:12:41 crc kubenswrapper[4779]: E1128 13:12:41.675939 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54370cfc-b67c-4720-b096-1b2b593e0fe2" containerName="extract-content" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.675948 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="54370cfc-b67c-4720-b096-1b2b593e0fe2" containerName="extract-content" Nov 28 13:12:41 crc kubenswrapper[4779]: E1128 13:12:41.675965 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6206e27-63ca-4f34-a479-6a460ca9b8b5" containerName="extract-content" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.675977 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6206e27-63ca-4f34-a479-6a460ca9b8b5" containerName="extract-content" Nov 28 13:12:41 crc kubenswrapper[4779]: E1128 13:12:41.676002 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b291c86-c80b-41e0-9ebd-bff5f1d3de42" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.676012 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b291c86-c80b-41e0-9ebd-bff5f1d3de42" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 13:12:41 crc kubenswrapper[4779]: E1128 13:12:41.676032 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54370cfc-b67c-4720-b096-1b2b593e0fe2" containerName="registry-server" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.676039 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="54370cfc-b67c-4720-b096-1b2b593e0fe2" containerName="registry-server" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.676296 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="54370cfc-b67c-4720-b096-1b2b593e0fe2" containerName="registry-server" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.676314 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6206e27-63ca-4f34-a479-6a460ca9b8b5" containerName="registry-server" Nov 
28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.676333 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b291c86-c80b-41e0-9ebd-bff5f1d3de42" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.676999 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.679292 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.679591 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.679814 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.679973 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.680687 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.698788 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh"] Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.789350 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.789405 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.789491 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.789527 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.789562 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w86mn\" (UniqueName: 
\"kubernetes.io/projected/303327cf-5fdb-49b9-a9ee-f8498657b10d-kube-api-access-w86mn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.892338 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.892397 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w86mn\" (UniqueName: \"kubernetes.io/projected/303327cf-5fdb-49b9-a9ee-f8498657b10d-kube-api-access-w86mn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.892491 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.892511 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.892574 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.896667 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.897221 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.899314 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: 
\"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.908601 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:41 crc kubenswrapper[4779]: I1128 13:12:41.913606 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w86mn\" (UniqueName: \"kubernetes.io/projected/303327cf-5fdb-49b9-a9ee-f8498657b10d-kube-api-access-w86mn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:42 crc kubenswrapper[4779]: I1128 13:12:42.006137 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:12:42 crc kubenswrapper[4779]: I1128 13:12:42.607451 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh"] Nov 28 13:12:42 crc kubenswrapper[4779]: W1128 13:12:42.615937 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod303327cf_5fdb_49b9_a9ee_f8498657b10d.slice/crio-7129f6120baff7c62cd615a9005a76e3980d91eef1a3338e766abebbe231d94f WatchSource:0}: Error finding container 7129f6120baff7c62cd615a9005a76e3980d91eef1a3338e766abebbe231d94f: Status 404 returned error can't find the container with id 7129f6120baff7c62cd615a9005a76e3980d91eef1a3338e766abebbe231d94f Nov 28 13:12:43 crc kubenswrapper[4779]: I1128 13:12:43.594971 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" event={"ID":"303327cf-5fdb-49b9-a9ee-f8498657b10d","Type":"ContainerStarted","Data":"7129f6120baff7c62cd615a9005a76e3980d91eef1a3338e766abebbe231d94f"} Nov 28 13:12:44 crc kubenswrapper[4779]: I1128 13:12:44.608990 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" event={"ID":"303327cf-5fdb-49b9-a9ee-f8498657b10d","Type":"ContainerStarted","Data":"3aaeba4755683d8bc727764f72f5a4713c329fefb9475f2085cad2db3460be09"} Nov 28 13:12:44 crc kubenswrapper[4779]: I1128 13:12:44.630014 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" podStartSLOduration=2.835209778 podStartE2EDuration="3.62999412s" podCreationTimestamp="2025-11-28 13:12:41 +0000 UTC" firstStartedPulling="2025-11-28 13:12:42.618051428 +0000 UTC m=+2223.183726802" lastFinishedPulling="2025-11-28 13:12:43.41283579 +0000 UTC m=+2223.978511144" observedRunningTime="2025-11-28 13:12:44.629298652 +0000 UTC m=+2225.194974046" watchObservedRunningTime="2025-11-28 13:12:44.62999412 +0000 UTC m=+2225.195669494" Nov 28 13:12:46 crc kubenswrapper[4779]: I1128 13:12:46.284865 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Nov 28 13:12:46 crc kubenswrapper[4779]: I1128 13:12:46.284918 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.058169 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xc75g"] Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.061956 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.077626 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xc75g"] Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.092159 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-catalog-content\") pod \"redhat-marketplace-xc75g\" (UID: \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\") " pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.092443 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l6xt\" (UniqueName: \"kubernetes.io/projected/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-kube-api-access-2l6xt\") pod \"redhat-marketplace-xc75g\" (UID: \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\") " pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.092517 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-utilities\") pod \"redhat-marketplace-xc75g\" (UID: \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\") " pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.194462 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-catalog-content\") pod \"redhat-marketplace-xc75g\" (UID: \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\") " pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.194615 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l6xt\" (UniqueName: \"kubernetes.io/projected/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-kube-api-access-2l6xt\") pod \"redhat-marketplace-xc75g\" (UID: \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\") " pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.194644 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-utilities\") pod \"redhat-marketplace-xc75g\" (UID: \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\") " pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.194915 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-catalog-content\") pod \"redhat-marketplace-xc75g\" (UID: \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\") " pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.195190 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-utilities\") pod \"redhat-marketplace-xc75g\" (UID: \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\") " pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.218582 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l6xt\" (UniqueName: \"kubernetes.io/projected/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-kube-api-access-2l6xt\") pod \"redhat-marketplace-xc75g\" (UID: \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\") " pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.408815 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.695515 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xc75g"] Nov 28 13:13:06 crc kubenswrapper[4779]: I1128 13:13:06.878747 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xc75g" event={"ID":"960ee051-0f53-4e4a-87ee-6d31b4bbae8a","Type":"ContainerStarted","Data":"9892bd5f50728d1284ffd12e04e14afc205ce962ebe4afb54df5035698f9f045"} Nov 28 13:13:07 crc kubenswrapper[4779]: I1128 13:13:07.891884 4779 generic.go:334] "Generic (PLEG): container finished" podID="960ee051-0f53-4e4a-87ee-6d31b4bbae8a" containerID="02d66a412b3fe894d29c4b376de08ae8701a218a79bad6fcde0f9a00b1585f19" exitCode=0 Nov 28 13:13:07 crc kubenswrapper[4779]: I1128 13:13:07.892337 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xc75g" event={"ID":"960ee051-0f53-4e4a-87ee-6d31b4bbae8a","Type":"ContainerDied","Data":"02d66a412b3fe894d29c4b376de08ae8701a218a79bad6fcde0f9a00b1585f19"} Nov 28 13:13:08 crc kubenswrapper[4779]: I1128 13:13:08.905990 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xc75g" event={"ID":"960ee051-0f53-4e4a-87ee-6d31b4bbae8a","Type":"ContainerStarted","Data":"e1e78b56d8dcbc280cda8038354ada2a6155a3cfc5d03f313737121cd4e6e139"} Nov 28 13:13:09 crc kubenswrapper[4779]: I1128 13:13:09.917413 4779 generic.go:334] "Generic (PLEG): container finished" podID="960ee051-0f53-4e4a-87ee-6d31b4bbae8a" containerID="e1e78b56d8dcbc280cda8038354ada2a6155a3cfc5d03f313737121cd4e6e139" exitCode=0 Nov 28 13:13:09 crc kubenswrapper[4779]: I1128 13:13:09.917507 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xc75g" event={"ID":"960ee051-0f53-4e4a-87ee-6d31b4bbae8a","Type":"ContainerDied","Data":"e1e78b56d8dcbc280cda8038354ada2a6155a3cfc5d03f313737121cd4e6e139"} Nov 28 13:13:10 crc kubenswrapper[4779]: I1128 13:13:10.933654 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xc75g" event={"ID":"960ee051-0f53-4e4a-87ee-6d31b4bbae8a","Type":"ContainerStarted","Data":"7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa"} Nov 28 13:13:10 crc kubenswrapper[4779]: I1128 13:13:10.959893 
4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xc75g" podStartSLOduration=2.319754814 podStartE2EDuration="4.959868213s" podCreationTimestamp="2025-11-28 13:13:06 +0000 UTC" firstStartedPulling="2025-11-28 13:13:07.895529246 +0000 UTC m=+2248.461204640" lastFinishedPulling="2025-11-28 13:13:10.535642645 +0000 UTC m=+2251.101318039" observedRunningTime="2025-11-28 13:13:10.953511023 +0000 UTC m=+2251.519186407" watchObservedRunningTime="2025-11-28 13:13:10.959868213 +0000 UTC m=+2251.525543587" Nov 28 13:13:16 crc kubenswrapper[4779]: I1128 13:13:16.285791 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:13:16 crc kubenswrapper[4779]: I1128 13:13:16.286693 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:13:16 crc kubenswrapper[4779]: I1128 13:13:16.414592 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:16 crc kubenswrapper[4779]: I1128 13:13:16.414674 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:16 crc kubenswrapper[4779]: I1128 13:13:16.510797 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:17 crc kubenswrapper[4779]: I1128 13:13:17.067761 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:17 crc kubenswrapper[4779]: I1128 13:13:17.122748 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xc75g"] Nov 28 13:13:19 crc kubenswrapper[4779]: I1128 13:13:19.019040 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xc75g" podUID="960ee051-0f53-4e4a-87ee-6d31b4bbae8a" containerName="registry-server" containerID="cri-o://7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa" gracePeriod=2 Nov 28 13:13:19 crc kubenswrapper[4779]: I1128 13:13:19.466056 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:19 crc kubenswrapper[4779]: I1128 13:13:19.531443 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l6xt\" (UniqueName: \"kubernetes.io/projected/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-kube-api-access-2l6xt\") pod \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\" (UID: \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\") " Nov 28 13:13:19 crc kubenswrapper[4779]: I1128 13:13:19.532372 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-catalog-content\") pod \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\" (UID: \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\") " Nov 28 13:13:19 crc kubenswrapper[4779]: I1128 13:13:19.532645 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-utilities\") pod \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\" (UID: \"960ee051-0f53-4e4a-87ee-6d31b4bbae8a\") " Nov 28 13:13:19 crc kubenswrapper[4779]: I1128 13:13:19.534273 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-utilities" (OuterVolumeSpecName: "utilities") pod "960ee051-0f53-4e4a-87ee-6d31b4bbae8a" (UID: "960ee051-0f53-4e4a-87ee-6d31b4bbae8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:13:19 crc kubenswrapper[4779]: I1128 13:13:19.540844 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-kube-api-access-2l6xt" (OuterVolumeSpecName: "kube-api-access-2l6xt") pod "960ee051-0f53-4e4a-87ee-6d31b4bbae8a" (UID: "960ee051-0f53-4e4a-87ee-6d31b4bbae8a"). InnerVolumeSpecName "kube-api-access-2l6xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:13:19 crc kubenswrapper[4779]: I1128 13:13:19.555448 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "960ee051-0f53-4e4a-87ee-6d31b4bbae8a" (UID: "960ee051-0f53-4e4a-87ee-6d31b4bbae8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:13:19 crc kubenswrapper[4779]: I1128 13:13:19.635072 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:13:19 crc kubenswrapper[4779]: I1128 13:13:19.635124 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l6xt\" (UniqueName: \"kubernetes.io/projected/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-kube-api-access-2l6xt\") on node \"crc\" DevicePath \"\"" Nov 28 13:13:19 crc kubenswrapper[4779]: I1128 13:13:19.635136 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960ee051-0f53-4e4a-87ee-6d31b4bbae8a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.034490 4779 generic.go:334] "Generic (PLEG): container finished" podID="960ee051-0f53-4e4a-87ee-6d31b4bbae8a" containerID="7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa" exitCode=0 Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.034589 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xc75g" Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.034616 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xc75g" event={"ID":"960ee051-0f53-4e4a-87ee-6d31b4bbae8a","Type":"ContainerDied","Data":"7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa"} Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.035037 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xc75g" event={"ID":"960ee051-0f53-4e4a-87ee-6d31b4bbae8a","Type":"ContainerDied","Data":"9892bd5f50728d1284ffd12e04e14afc205ce962ebe4afb54df5035698f9f045"} Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.035087 4779 scope.go:117] "RemoveContainer" containerID="7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa" Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.073301 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xc75g"] Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.082296 4779 scope.go:117] "RemoveContainer" containerID="e1e78b56d8dcbc280cda8038354ada2a6155a3cfc5d03f313737121cd4e6e139" Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.086561 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xc75g"] Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.105692 4779 scope.go:117] "RemoveContainer" containerID="02d66a412b3fe894d29c4b376de08ae8701a218a79bad6fcde0f9a00b1585f19" Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.166059 4779 scope.go:117] "RemoveContainer" containerID="7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa" Nov 28 13:13:20 crc kubenswrapper[4779]: E1128 13:13:20.166588 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa\": container with ID starting with 7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa not found: ID does not exist" containerID="7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa" Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.166626 4779 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa"} err="failed to get container status \"7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa\": rpc error: code = NotFound desc = could not find container \"7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa\": container with ID starting with 7d2d0e7e792cfe2c056aa2ccf5f981a4248a34af6912ef9d8c779f64e47193fa not found: ID does not exist" Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.166657 4779 scope.go:117] "RemoveContainer" containerID="e1e78b56d8dcbc280cda8038354ada2a6155a3cfc5d03f313737121cd4e6e139" Nov 28 13:13:20 crc kubenswrapper[4779]: E1128 13:13:20.167057 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1e78b56d8dcbc280cda8038354ada2a6155a3cfc5d03f313737121cd4e6e139\": container with ID starting with e1e78b56d8dcbc280cda8038354ada2a6155a3cfc5d03f313737121cd4e6e139 not found: ID does not exist" containerID="e1e78b56d8dcbc280cda8038354ada2a6155a3cfc5d03f313737121cd4e6e139" Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.167116 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1e78b56d8dcbc280cda8038354ada2a6155a3cfc5d03f313737121cd4e6e139"} err="failed to get container status \"e1e78b56d8dcbc280cda8038354ada2a6155a3cfc5d03f313737121cd4e6e139\": rpc error: code = NotFound desc = could not find container \"e1e78b56d8dcbc280cda8038354ada2a6155a3cfc5d03f313737121cd4e6e139\": container with ID starting with e1e78b56d8dcbc280cda8038354ada2a6155a3cfc5d03f313737121cd4e6e139 not found: ID does not exist" Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.167136 4779 scope.go:117] "RemoveContainer" containerID="02d66a412b3fe894d29c4b376de08ae8701a218a79bad6fcde0f9a00b1585f19" Nov 28 13:13:20 crc kubenswrapper[4779]: E1128 13:13:20.167472 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02d66a412b3fe894d29c4b376de08ae8701a218a79bad6fcde0f9a00b1585f19\": container with ID starting with 02d66a412b3fe894d29c4b376de08ae8701a218a79bad6fcde0f9a00b1585f19 not found: ID does not exist" containerID="02d66a412b3fe894d29c4b376de08ae8701a218a79bad6fcde0f9a00b1585f19" Nov 28 13:13:20 crc kubenswrapper[4779]: I1128 13:13:20.167523 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02d66a412b3fe894d29c4b376de08ae8701a218a79bad6fcde0f9a00b1585f19"} err="failed to get container status \"02d66a412b3fe894d29c4b376de08ae8701a218a79bad6fcde0f9a00b1585f19\": rpc error: code = NotFound desc = could not find container \"02d66a412b3fe894d29c4b376de08ae8701a218a79bad6fcde0f9a00b1585f19\": container with ID starting with 02d66a412b3fe894d29c4b376de08ae8701a218a79bad6fcde0f9a00b1585f19 not found: ID does not exist" Nov 28 13:13:21 crc kubenswrapper[4779]: I1128 13:13:21.747465 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960ee051-0f53-4e4a-87ee-6d31b4bbae8a" path="/var/lib/kubelet/pods/960ee051-0f53-4e4a-87ee-6d31b4bbae8a/volumes" Nov 28 13:13:46 crc kubenswrapper[4779]: I1128 13:13:46.285515 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:13:46 crc kubenswrapper[4779]: I1128 13:13:46.286268 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:13:46 crc kubenswrapper[4779]: I1128 13:13:46.286333 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 13:13:46 crc kubenswrapper[4779]: I1128 13:13:46.287462 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 13:13:46 crc kubenswrapper[4779]: I1128 13:13:46.287560 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" gracePeriod=600 Nov 28 13:13:46 crc kubenswrapper[4779]: E1128 13:13:46.410828 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:13:47 crc kubenswrapper[4779]: I1128 13:13:47.327884 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" exitCode=0 Nov 28 13:13:47 crc kubenswrapper[4779]: I1128 13:13:47.327956 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94"} Nov 28 13:13:47 crc kubenswrapper[4779]: I1128 13:13:47.328008 4779 scope.go:117] "RemoveContainer" containerID="618d5fbe87fa7087aedd943e484ddc1d1d52c7576dd65968b53ae378fd1610f9" Nov 28 13:13:47 crc kubenswrapper[4779]: I1128 13:13:47.328935 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:13:47 crc kubenswrapper[4779]: E1128 13:13:47.329567 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:13:57 crc kubenswrapper[4779]: I1128 13:13:57.726245 4779 scope.go:117] "RemoveContainer" 
containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:13:57 crc kubenswrapper[4779]: E1128 13:13:57.726970 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:14:12 crc kubenswrapper[4779]: I1128 13:14:12.726726 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:14:12 crc kubenswrapper[4779]: E1128 13:14:12.727444 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:14:23 crc kubenswrapper[4779]: I1128 13:14:23.727074 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:14:23 crc kubenswrapper[4779]: E1128 13:14:23.727991 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:14:34 crc kubenswrapper[4779]: I1128 13:14:34.727307 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:14:34 crc kubenswrapper[4779]: E1128 13:14:34.728441 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:14:49 crc kubenswrapper[4779]: I1128 13:14:49.735424 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:14:49 crc kubenswrapper[4779]: E1128 13:14:49.736436 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.173905 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5"] Nov 28 13:15:00 crc kubenswrapper[4779]: E1128 13:15:00.175062 4779 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="960ee051-0f53-4e4a-87ee-6d31b4bbae8a" containerName="extract-utilities" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.175083 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="960ee051-0f53-4e4a-87ee-6d31b4bbae8a" containerName="extract-utilities" Nov 28 13:15:00 crc kubenswrapper[4779]: E1128 13:15:00.175126 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960ee051-0f53-4e4a-87ee-6d31b4bbae8a" containerName="extract-content" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.175139 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="960ee051-0f53-4e4a-87ee-6d31b4bbae8a" containerName="extract-content" Nov 28 13:15:00 crc kubenswrapper[4779]: E1128 13:15:00.175171 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960ee051-0f53-4e4a-87ee-6d31b4bbae8a" containerName="registry-server" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.175183 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="960ee051-0f53-4e4a-87ee-6d31b4bbae8a" containerName="registry-server" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.175515 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="960ee051-0f53-4e4a-87ee-6d31b4bbae8a" containerName="registry-server" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.176691 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.179587 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.179999 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.197276 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5"] Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.319712 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whxg8\" (UniqueName: \"kubernetes.io/projected/8df347a4-5fcc-46f8-a24d-f2a26e44a039-kube-api-access-whxg8\") pod \"collect-profiles-29405595-8tcp5\" (UID: \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.319790 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8df347a4-5fcc-46f8-a24d-f2a26e44a039-config-volume\") pod \"collect-profiles-29405595-8tcp5\" (UID: \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.320037 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8df347a4-5fcc-46f8-a24d-f2a26e44a039-secret-volume\") pod \"collect-profiles-29405595-8tcp5\" (UID: \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.423061 4779 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-whxg8\" (UniqueName: \"kubernetes.io/projected/8df347a4-5fcc-46f8-a24d-f2a26e44a039-kube-api-access-whxg8\") pod \"collect-profiles-29405595-8tcp5\" (UID: \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.423270 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8df347a4-5fcc-46f8-a24d-f2a26e44a039-config-volume\") pod \"collect-profiles-29405595-8tcp5\" (UID: \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.423382 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8df347a4-5fcc-46f8-a24d-f2a26e44a039-secret-volume\") pod \"collect-profiles-29405595-8tcp5\" (UID: \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.424865 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8df347a4-5fcc-46f8-a24d-f2a26e44a039-config-volume\") pod \"collect-profiles-29405595-8tcp5\" (UID: \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.431450 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8df347a4-5fcc-46f8-a24d-f2a26e44a039-secret-volume\") pod \"collect-profiles-29405595-8tcp5\" (UID: \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.441589 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whxg8\" (UniqueName: \"kubernetes.io/projected/8df347a4-5fcc-46f8-a24d-f2a26e44a039-kube-api-access-whxg8\") pod \"collect-profiles-29405595-8tcp5\" (UID: \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.536085 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:00 crc kubenswrapper[4779]: I1128 13:15:00.725756 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:15:00 crc kubenswrapper[4779]: E1128 13:15:00.726043 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:15:01 crc kubenswrapper[4779]: I1128 13:15:01.029411 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5"] Nov 28 13:15:01 crc kubenswrapper[4779]: W1128 13:15:01.036773 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8df347a4_5fcc_46f8_a24d_f2a26e44a039.slice/crio-a660e19f552188cfb3c31ccffcf521d6288be013cffe56e75ccda3c108b41236 WatchSource:0}: Error finding container a660e19f552188cfb3c31ccffcf521d6288be013cffe56e75ccda3c108b41236: Status 404 returned error can't find the container with id a660e19f552188cfb3c31ccffcf521d6288be013cffe56e75ccda3c108b41236 Nov 28 13:15:01 crc kubenswrapper[4779]: I1128 13:15:01.116480 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" event={"ID":"8df347a4-5fcc-46f8-a24d-f2a26e44a039","Type":"ContainerStarted","Data":"a660e19f552188cfb3c31ccffcf521d6288be013cffe56e75ccda3c108b41236"} Nov 28 13:15:02 crc kubenswrapper[4779]: I1128 13:15:02.127064 4779 generic.go:334] "Generic (PLEG): container finished" podID="8df347a4-5fcc-46f8-a24d-f2a26e44a039" containerID="7519d59a0a7018b62411a3fa74964a575046be943b39bc136970e4e7bf33891d" exitCode=0 Nov 28 13:15:02 crc kubenswrapper[4779]: I1128 13:15:02.127154 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" event={"ID":"8df347a4-5fcc-46f8-a24d-f2a26e44a039","Type":"ContainerDied","Data":"7519d59a0a7018b62411a3fa74964a575046be943b39bc136970e4e7bf33891d"} Nov 28 13:15:03 crc kubenswrapper[4779]: I1128 13:15:03.481989 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:03 crc kubenswrapper[4779]: I1128 13:15:03.578933 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whxg8\" (UniqueName: \"kubernetes.io/projected/8df347a4-5fcc-46f8-a24d-f2a26e44a039-kube-api-access-whxg8\") pod \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\" (UID: \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\") " Nov 28 13:15:03 crc kubenswrapper[4779]: I1128 13:15:03.579041 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8df347a4-5fcc-46f8-a24d-f2a26e44a039-secret-volume\") pod \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\" (UID: \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\") " Nov 28 13:15:03 crc kubenswrapper[4779]: I1128 13:15:03.579126 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8df347a4-5fcc-46f8-a24d-f2a26e44a039-config-volume\") pod \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\" (UID: \"8df347a4-5fcc-46f8-a24d-f2a26e44a039\") " Nov 28 13:15:03 crc kubenswrapper[4779]: I1128 13:15:03.579753 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8df347a4-5fcc-46f8-a24d-f2a26e44a039-config-volume" (OuterVolumeSpecName: "config-volume") pod "8df347a4-5fcc-46f8-a24d-f2a26e44a039" (UID: "8df347a4-5fcc-46f8-a24d-f2a26e44a039"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 13:15:03 crc kubenswrapper[4779]: I1128 13:15:03.583880 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df347a4-5fcc-46f8-a24d-f2a26e44a039-kube-api-access-whxg8" (OuterVolumeSpecName: "kube-api-access-whxg8") pod "8df347a4-5fcc-46f8-a24d-f2a26e44a039" (UID: "8df347a4-5fcc-46f8-a24d-f2a26e44a039"). InnerVolumeSpecName "kube-api-access-whxg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:15:03 crc kubenswrapper[4779]: I1128 13:15:03.583994 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df347a4-5fcc-46f8-a24d-f2a26e44a039-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8df347a4-5fcc-46f8-a24d-f2a26e44a039" (UID: "8df347a4-5fcc-46f8-a24d-f2a26e44a039"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:15:03 crc kubenswrapper[4779]: I1128 13:15:03.681222 4779 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8df347a4-5fcc-46f8-a24d-f2a26e44a039-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 13:15:03 crc kubenswrapper[4779]: I1128 13:15:03.681261 4779 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8df347a4-5fcc-46f8-a24d-f2a26e44a039-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 13:15:03 crc kubenswrapper[4779]: I1128 13:15:03.681283 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whxg8\" (UniqueName: \"kubernetes.io/projected/8df347a4-5fcc-46f8-a24d-f2a26e44a039-kube-api-access-whxg8\") on node \"crc\" DevicePath \"\"" Nov 28 13:15:04 crc kubenswrapper[4779]: I1128 13:15:04.153302 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" event={"ID":"8df347a4-5fcc-46f8-a24d-f2a26e44a039","Type":"ContainerDied","Data":"a660e19f552188cfb3c31ccffcf521d6288be013cffe56e75ccda3c108b41236"} Nov 28 13:15:04 crc kubenswrapper[4779]: I1128 13:15:04.153364 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a660e19f552188cfb3c31ccffcf521d6288be013cffe56e75ccda3c108b41236" Nov 28 13:15:04 crc kubenswrapper[4779]: I1128 13:15:04.153446 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405595-8tcp5" Nov 28 13:15:04 crc kubenswrapper[4779]: I1128 13:15:04.556345 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5"] Nov 28 13:15:04 crc kubenswrapper[4779]: I1128 13:15:04.566560 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405550-kkxf5"] Nov 28 13:15:05 crc kubenswrapper[4779]: I1128 13:15:05.738768 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="189cc15e-4851-49ad-a757-49451158a3d7" path="/var/lib/kubelet/pods/189cc15e-4851-49ad-a757-49451158a3d7/volumes" Nov 28 13:15:10 crc kubenswrapper[4779]: I1128 13:15:10.778821 4779 scope.go:117] "RemoveContainer" containerID="ed8d289ec1b39b0e1bc6891ba419f36988de748661a8dc25c8bdb04b750af4db" Nov 28 13:15:13 crc kubenswrapper[4779]: I1128 13:15:13.733380 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:15:13 crc kubenswrapper[4779]: E1128 13:15:13.734229 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:15:25 crc kubenswrapper[4779]: I1128 13:15:25.726421 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:15:25 crc kubenswrapper[4779]: E1128 13:15:25.727459 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:15:37 crc kubenswrapper[4779]: I1128 13:15:37.727837 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:15:37 crc kubenswrapper[4779]: E1128 13:15:37.729223 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:15:49 crc kubenswrapper[4779]: I1128 13:15:49.737760 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:15:49 crc kubenswrapper[4779]: E1128 13:15:49.738719 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:16:02 crc kubenswrapper[4779]: I1128 13:16:02.726517 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:16:02 crc kubenswrapper[4779]: E1128 13:16:02.727578 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:16:13 crc kubenswrapper[4779]: I1128 13:16:13.726605 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:16:13 crc kubenswrapper[4779]: E1128 13:16:13.727610 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:16:26 crc kubenswrapper[4779]: I1128 13:16:26.726585 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:16:26 crc kubenswrapper[4779]: E1128 13:16:26.729530 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:16:37 crc kubenswrapper[4779]: I1128 13:16:37.726735 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:16:37 crc kubenswrapper[4779]: E1128 13:16:37.727884 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:16:52 crc kubenswrapper[4779]: I1128 13:16:52.727701 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:16:52 crc kubenswrapper[4779]: E1128 13:16:52.728944 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:17:04 crc kubenswrapper[4779]: I1128 13:17:04.727122 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:17:04 crc kubenswrapper[4779]: E1128 13:17:04.728022 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:17:17 crc kubenswrapper[4779]: I1128 13:17:17.726838 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:17:17 crc kubenswrapper[4779]: E1128 13:17:17.728159 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:17:29 crc kubenswrapper[4779]: I1128 13:17:29.646769 4779 generic.go:334] "Generic (PLEG): container finished" podID="303327cf-5fdb-49b9-a9ee-f8498657b10d" containerID="3aaeba4755683d8bc727764f72f5a4713c329fefb9475f2085cad2db3460be09" exitCode=0 Nov 28 13:17:29 crc kubenswrapper[4779]: I1128 13:17:29.646960 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" event={"ID":"303327cf-5fdb-49b9-a9ee-f8498657b10d","Type":"ContainerDied","Data":"3aaeba4755683d8bc727764f72f5a4713c329fefb9475f2085cad2db3460be09"} Nov 28 13:17:29 crc kubenswrapper[4779]: I1128 13:17:29.733805 4779 scope.go:117] "RemoveContainer" 
containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:17:29 crc kubenswrapper[4779]: E1128 13:17:29.734187 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.198035 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.280224 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w86mn\" (UniqueName: \"kubernetes.io/projected/303327cf-5fdb-49b9-a9ee-f8498657b10d-kube-api-access-w86mn\") pod \"303327cf-5fdb-49b9-a9ee-f8498657b10d\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.280842 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-libvirt-combined-ca-bundle\") pod \"303327cf-5fdb-49b9-a9ee-f8498657b10d\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.281982 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-libvirt-secret-0\") pod \"303327cf-5fdb-49b9-a9ee-f8498657b10d\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.282400 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-inventory\") pod \"303327cf-5fdb-49b9-a9ee-f8498657b10d\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.282756 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-ssh-key\") pod \"303327cf-5fdb-49b9-a9ee-f8498657b10d\" (UID: \"303327cf-5fdb-49b9-a9ee-f8498657b10d\") " Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.287117 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "303327cf-5fdb-49b9-a9ee-f8498657b10d" (UID: "303327cf-5fdb-49b9-a9ee-f8498657b10d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.306627 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303327cf-5fdb-49b9-a9ee-f8498657b10d-kube-api-access-w86mn" (OuterVolumeSpecName: "kube-api-access-w86mn") pod "303327cf-5fdb-49b9-a9ee-f8498657b10d" (UID: "303327cf-5fdb-49b9-a9ee-f8498657b10d"). InnerVolumeSpecName "kube-api-access-w86mn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.309001 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "303327cf-5fdb-49b9-a9ee-f8498657b10d" (UID: "303327cf-5fdb-49b9-a9ee-f8498657b10d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.324193 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-inventory" (OuterVolumeSpecName: "inventory") pod "303327cf-5fdb-49b9-a9ee-f8498657b10d" (UID: "303327cf-5fdb-49b9-a9ee-f8498657b10d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.341317 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "303327cf-5fdb-49b9-a9ee-f8498657b10d" (UID: "303327cf-5fdb-49b9-a9ee-f8498657b10d"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.387666 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w86mn\" (UniqueName: \"kubernetes.io/projected/303327cf-5fdb-49b9-a9ee-f8498657b10d-kube-api-access-w86mn\") on node \"crc\" DevicePath \"\"" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.387717 4779 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.387747 4779 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.387764 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.387777 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/303327cf-5fdb-49b9-a9ee-f8498657b10d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.671610 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" event={"ID":"303327cf-5fdb-49b9-a9ee-f8498657b10d","Type":"ContainerDied","Data":"7129f6120baff7c62cd615a9005a76e3980d91eef1a3338e766abebbe231d94f"} Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.671711 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7129f6120baff7c62cd615a9005a76e3980d91eef1a3338e766abebbe231d94f" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.671741 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.791167 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4"] Nov 28 13:17:31 crc kubenswrapper[4779]: E1128 13:17:31.791631 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303327cf-5fdb-49b9-a9ee-f8498657b10d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.791653 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="303327cf-5fdb-49b9-a9ee-f8498657b10d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 13:17:31 crc kubenswrapper[4779]: E1128 13:17:31.791682 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df347a4-5fcc-46f8-a24d-f2a26e44a039" containerName="collect-profiles" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.791691 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df347a4-5fcc-46f8-a24d-f2a26e44a039" containerName="collect-profiles" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.791927 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="303327cf-5fdb-49b9-a9ee-f8498657b10d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.791963 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df347a4-5fcc-46f8-a24d-f2a26e44a039" containerName="collect-profiles" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.792693 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.795531 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.795669 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.795965 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.796217 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.796528 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.800932 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.800963 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.804416 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4"] Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.908020 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.908142 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.908226 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.908286 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.908319 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.908347 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.908401 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.908526 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:31 crc kubenswrapper[4779]: I1128 13:17:31.908568 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2j99\" (UniqueName: \"kubernetes.io/projected/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-kube-api-access-q2j99\") 
pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.010339 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.010663 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.010765 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.010841 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.010943 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.011175 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.011258 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2j99\" (UniqueName: \"kubernetes.io/projected/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-kube-api-access-q2j99\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.011351 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: 
\"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.011509 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.016012 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.019939 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.020791 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.022910 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.023764 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.023862 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.023915 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc 
kubenswrapper[4779]: I1128 13:17:32.025498 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.038387 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2j99\" (UniqueName: \"kubernetes.io/projected/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-kube-api-access-q2j99\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k9nk4\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.111556 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.712320 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4"] Nov 28 13:17:32 crc kubenswrapper[4779]: I1128 13:17:32.721642 4779 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 13:17:33 crc kubenswrapper[4779]: I1128 13:17:33.691893 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" event={"ID":"42f930a2-ac0c-43b5-ab17-1ccd2f30340e","Type":"ContainerStarted","Data":"f39888a642cd0c698f64d1328981261e6bda29b972b364c2fd92d547f14fabb5"} Nov 28 13:17:33 crc kubenswrapper[4779]: I1128 13:17:33.692271 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" event={"ID":"42f930a2-ac0c-43b5-ab17-1ccd2f30340e","Type":"ContainerStarted","Data":"2f782ecd9743b4e0b9b8d73d88687690cbf0de57234dcaf85e9457797f1e1ef0"} Nov 28 13:17:33 crc kubenswrapper[4779]: I1128 13:17:33.716750 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" podStartSLOduration=2.262370482 podStartE2EDuration="2.716733257s" podCreationTimestamp="2025-11-28 13:17:31 +0000 UTC" firstStartedPulling="2025-11-28 13:17:32.721069936 +0000 UTC m=+2513.286745330" lastFinishedPulling="2025-11-28 13:17:33.175432711 +0000 UTC m=+2513.741108105" observedRunningTime="2025-11-28 13:17:33.711429945 +0000 UTC m=+2514.277105319" watchObservedRunningTime="2025-11-28 13:17:33.716733257 +0000 UTC m=+2514.282408601" Nov 28 13:17:44 crc kubenswrapper[4779]: I1128 13:17:44.726153 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" Nov 28 13:17:44 crc kubenswrapper[4779]: E1128 13:17:44.727006 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:17:55 crc kubenswrapper[4779]: I1128 13:17:55.726541 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94" 
Nov 28 13:17:55 crc kubenswrapper[4779]: E1128 13:17:55.727695 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:18:07 crc kubenswrapper[4779]: I1128 13:18:07.726865 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94"
Nov 28 13:18:07 crc kubenswrapper[4779]: E1128 13:18:07.727818 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:18:19 crc kubenswrapper[4779]: I1128 13:18:19.750456 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94"
Nov 28 13:18:19 crc kubenswrapper[4779]: E1128 13:18:19.751393 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:18:33 crc kubenswrapper[4779]: I1128 13:18:33.726945 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94"
Nov 28 13:18:33 crc kubenswrapper[4779]: E1128 13:18:33.727586 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:18:46 crc kubenswrapper[4779]: I1128 13:18:46.726713 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94"
Nov 28 13:18:47 crc kubenswrapper[4779]: I1128 13:18:47.529519 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"10993b5460bb0b6d68e62c004e58a6435fe581e4d0f8daaff2aaa3cfe0a29888"}
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.228414 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gshph"]
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.231915 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.263006 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gshph"]
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.272524 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-utilities\") pod \"redhat-operators-gshph\" (UID: \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\") " pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.272591 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-catalog-content\") pod \"redhat-operators-gshph\" (UID: \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\") " pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.272880 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7dv5\" (UniqueName: \"kubernetes.io/projected/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-kube-api-access-g7dv5\") pod \"redhat-operators-gshph\" (UID: \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\") " pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.374081 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7dv5\" (UniqueName: \"kubernetes.io/projected/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-kube-api-access-g7dv5\") pod \"redhat-operators-gshph\" (UID: \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\") " pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.374160 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-utilities\") pod \"redhat-operators-gshph\" (UID: \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\") " pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.374181 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-catalog-content\") pod \"redhat-operators-gshph\" (UID: \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\") " pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.374659 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-utilities\") pod \"redhat-operators-gshph\" (UID: \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\") " pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.374696 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-catalog-content\") pod \"redhat-operators-gshph\" (UID: \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\") " pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.394467 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7dv5\" (UniqueName: \"kubernetes.io/projected/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-kube-api-access-g7dv5\") pod \"redhat-operators-gshph\" (UID: \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\") " pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:01 crc kubenswrapper[4779]: I1128 13:20:01.586177 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:02 crc kubenswrapper[4779]: I1128 13:20:02.081170 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gshph"]
Nov 28 13:20:02 crc kubenswrapper[4779]: I1128 13:20:02.392627 4779 generic.go:334] "Generic (PLEG): container finished" podID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" containerID="e6e4b99de9f3635da20886450f66449b0d83111ee350c3dddb9ef99536322f29" exitCode=0
Nov 28 13:20:02 crc kubenswrapper[4779]: I1128 13:20:02.392936 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshph" event={"ID":"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0","Type":"ContainerDied","Data":"e6e4b99de9f3635da20886450f66449b0d83111ee350c3dddb9ef99536322f29"}
Nov 28 13:20:02 crc kubenswrapper[4779]: I1128 13:20:02.393019 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshph" event={"ID":"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0","Type":"ContainerStarted","Data":"099c093498494a952fc6da459c505f9462a9c255f124fa6c473725dc7d9d503a"}
Nov 28 13:20:04 crc kubenswrapper[4779]: I1128 13:20:04.413989 4779 generic.go:334] "Generic (PLEG): container finished" podID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" containerID="0c73cd7642988e971ba1bb9f0c6a61dbea8785a5d7bda1b67fcb639b4f3ea405" exitCode=0
Nov 28 13:20:04 crc kubenswrapper[4779]: I1128 13:20:04.414077 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshph" event={"ID":"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0","Type":"ContainerDied","Data":"0c73cd7642988e971ba1bb9f0c6a61dbea8785a5d7bda1b67fcb639b4f3ea405"}
Nov 28 13:20:06 crc kubenswrapper[4779]: I1128 13:20:06.434294 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshph" event={"ID":"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0","Type":"ContainerStarted","Data":"7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73"}
Nov 28 13:20:06 crc kubenswrapper[4779]: I1128 13:20:06.467535 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gshph" podStartSLOduration=2.653293877 podStartE2EDuration="5.467509128s" podCreationTimestamp="2025-11-28 13:20:01 +0000 UTC" firstStartedPulling="2025-11-28 13:20:02.395575248 +0000 UTC m=+2662.961250602" lastFinishedPulling="2025-11-28 13:20:05.209790499 +0000 UTC m=+2665.775465853" observedRunningTime="2025-11-28 13:20:06.459342529 +0000 UTC m=+2667.025017903" watchObservedRunningTime="2025-11-28 13:20:06.467509128 +0000 UTC m=+2667.033184522"
Nov 28 13:20:11 crc kubenswrapper[4779]: I1128 13:20:11.587254 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:11 crc kubenswrapper[4779]: I1128 13:20:11.587731 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:12 crc kubenswrapper[4779]: I1128 13:20:12.655842 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gshph" podUID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" containerName="registry-server" probeResult="failure" output=<
Nov 28 13:20:12 crc kubenswrapper[4779]: timeout: failed to connect service ":50051" within 1s
Nov 28 13:20:12 crc kubenswrapper[4779]: >
Nov 28 13:20:21 crc kubenswrapper[4779]: I1128 13:20:21.641004 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:21 crc kubenswrapper[4779]: I1128 13:20:21.698533 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:21 crc kubenswrapper[4779]: I1128 13:20:21.881286 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gshph"]
Nov 28 13:20:23 crc kubenswrapper[4779]: I1128 13:20:23.630741 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gshph" podUID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" containerName="registry-server" containerID="cri-o://7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73" gracePeriod=2
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.115697 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.154711 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-catalog-content\") pod \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\" (UID: \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\") "
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.154765 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7dv5\" (UniqueName: \"kubernetes.io/projected/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-kube-api-access-g7dv5\") pod \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\" (UID: \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\") "
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.154924 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-utilities\") pod \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\" (UID: \"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0\") "
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.156248 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-utilities" (OuterVolumeSpecName: "utilities") pod "bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" (UID: "bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.161839 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-kube-api-access-g7dv5" (OuterVolumeSpecName: "kube-api-access-g7dv5") pod "bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" (UID: "bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0"). InnerVolumeSpecName "kube-api-access-g7dv5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.263726 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" (UID: "bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.265470 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.265494 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7dv5\" (UniqueName: \"kubernetes.io/projected/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-kube-api-access-g7dv5\") on node \"crc\" DevicePath \"\""
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.265510 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.639547 4779 generic.go:334] "Generic (PLEG): container finished" podID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" containerID="7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73" exitCode=0
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.639590 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gshph"
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.639615 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshph" event={"ID":"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0","Type":"ContainerDied","Data":"7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73"}
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.639659 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshph" event={"ID":"bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0","Type":"ContainerDied","Data":"099c093498494a952fc6da459c505f9462a9c255f124fa6c473725dc7d9d503a"}
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.639691 4779 scope.go:117] "RemoveContainer" containerID="7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73"
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.657993 4779 scope.go:117] "RemoveContainer" containerID="0c73cd7642988e971ba1bb9f0c6a61dbea8785a5d7bda1b67fcb639b4f3ea405"
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.674059 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gshph"]
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.684895 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gshph"]
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.692933 4779 scope.go:117] "RemoveContainer" containerID="e6e4b99de9f3635da20886450f66449b0d83111ee350c3dddb9ef99536322f29"
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.775362 4779 scope.go:117] "RemoveContainer" containerID="7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73"
Nov 28 13:20:24 crc kubenswrapper[4779]: E1128 13:20:24.775720 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73\": container with ID starting with 7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73 not found: ID does not exist" containerID="7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73"
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.775756 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73"} err="failed to get container status \"7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73\": rpc error: code = NotFound desc = could not find container \"7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73\": container with ID starting with 7e3e97160b8feda21dbd7f09d7024facbc3628c8e2a1a1870f0cc48f3f9bfb73 not found: ID does not exist"
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.775779 4779 scope.go:117] "RemoveContainer" containerID="0c73cd7642988e971ba1bb9f0c6a61dbea8785a5d7bda1b67fcb639b4f3ea405"
Nov 28 13:20:24 crc kubenswrapper[4779]: E1128 13:20:24.776367 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c73cd7642988e971ba1bb9f0c6a61dbea8785a5d7bda1b67fcb639b4f3ea405\": container with ID starting with 0c73cd7642988e971ba1bb9f0c6a61dbea8785a5d7bda1b67fcb639b4f3ea405 not found: ID does not exist" containerID="0c73cd7642988e971ba1bb9f0c6a61dbea8785a5d7bda1b67fcb639b4f3ea405"
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.776397 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c73cd7642988e971ba1bb9f0c6a61dbea8785a5d7bda1b67fcb639b4f3ea405"} err="failed to get container status \"0c73cd7642988e971ba1bb9f0c6a61dbea8785a5d7bda1b67fcb639b4f3ea405\": rpc error: code = NotFound desc = could not find container \"0c73cd7642988e971ba1bb9f0c6a61dbea8785a5d7bda1b67fcb639b4f3ea405\": container with ID starting with 0c73cd7642988e971ba1bb9f0c6a61dbea8785a5d7bda1b67fcb639b4f3ea405 not found: ID does not exist"
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.776420 4779 scope.go:117] "RemoveContainer" containerID="e6e4b99de9f3635da20886450f66449b0d83111ee350c3dddb9ef99536322f29"
Nov 28 13:20:24 crc kubenswrapper[4779]: E1128 13:20:24.776670 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6e4b99de9f3635da20886450f66449b0d83111ee350c3dddb9ef99536322f29\": container with ID starting with e6e4b99de9f3635da20886450f66449b0d83111ee350c3dddb9ef99536322f29 not found: ID does not exist" containerID="e6e4b99de9f3635da20886450f66449b0d83111ee350c3dddb9ef99536322f29"
Nov 28 13:20:24 crc kubenswrapper[4779]: I1128 13:20:24.776698 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e4b99de9f3635da20886450f66449b0d83111ee350c3dddb9ef99536322f29"} err="failed to get container status \"e6e4b99de9f3635da20886450f66449b0d83111ee350c3dddb9ef99536322f29\": rpc error: code = NotFound desc = could not find container \"e6e4b99de9f3635da20886450f66449b0d83111ee350c3dddb9ef99536322f29\": container with ID starting with e6e4b99de9f3635da20886450f66449b0d83111ee350c3dddb9ef99536322f29 not found: ID does not exist"
Nov 28 13:20:25 crc kubenswrapper[4779]: I1128 13:20:25.739757 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" path="/var/lib/kubelet/pods/bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0/volumes"
Nov 28 13:20:46 crc kubenswrapper[4779]: I1128 13:20:46.284826 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:20:46 crc kubenswrapper[4779]: I1128 13:20:46.285908 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:20:52 crc kubenswrapper[4779]: I1128 13:20:52.969538 4779 generic.go:334] "Generic (PLEG): container finished" podID="42f930a2-ac0c-43b5-ab17-1ccd2f30340e" containerID="f39888a642cd0c698f64d1328981261e6bda29b972b364c2fd92d547f14fabb5" exitCode=0
Nov 28 13:20:52 crc kubenswrapper[4779]: I1128 13:20:52.969663 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" event={"ID":"42f930a2-ac0c-43b5-ab17-1ccd2f30340e","Type":"ContainerDied","Data":"f39888a642cd0c698f64d1328981261e6bda29b972b364c2fd92d547f14fabb5"}
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.412665 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4"
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.540931 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-extra-config-0\") pod \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") "
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.540991 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-cell1-compute-config-1\") pod \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") "
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.541034 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-ssh-key\") pod \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") "
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.541065 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-cell1-compute-config-0\") pod \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") "
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.541115 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-inventory\") pod \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") "
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.541168 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-combined-ca-bundle\") pod \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") "
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.541193 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2j99\" (UniqueName: \"kubernetes.io/projected/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-kube-api-access-q2j99\") pod \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") "
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.541229 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-migration-ssh-key-0\") pod \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") "
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.541293 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-migration-ssh-key-1\") pod \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\" (UID: \"42f930a2-ac0c-43b5-ab17-1ccd2f30340e\") "
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.552414 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-kube-api-access-q2j99" (OuterVolumeSpecName: "kube-api-access-q2j99") pod "42f930a2-ac0c-43b5-ab17-1ccd2f30340e" (UID: "42f930a2-ac0c-43b5-ab17-1ccd2f30340e"). InnerVolumeSpecName "kube-api-access-q2j99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.552805 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "42f930a2-ac0c-43b5-ab17-1ccd2f30340e" (UID: "42f930a2-ac0c-43b5-ab17-1ccd2f30340e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.569007 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "42f930a2-ac0c-43b5-ab17-1ccd2f30340e" (UID: "42f930a2-ac0c-43b5-ab17-1ccd2f30340e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.576732 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "42f930a2-ac0c-43b5-ab17-1ccd2f30340e" (UID: "42f930a2-ac0c-43b5-ab17-1ccd2f30340e"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.582792 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "42f930a2-ac0c-43b5-ab17-1ccd2f30340e" (UID: "42f930a2-ac0c-43b5-ab17-1ccd2f30340e"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.584363 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "42f930a2-ac0c-43b5-ab17-1ccd2f30340e" (UID: "42f930a2-ac0c-43b5-ab17-1ccd2f30340e"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.587390 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "42f930a2-ac0c-43b5-ab17-1ccd2f30340e" (UID: "42f930a2-ac0c-43b5-ab17-1ccd2f30340e"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.587777 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "42f930a2-ac0c-43b5-ab17-1ccd2f30340e" (UID: "42f930a2-ac0c-43b5-ab17-1ccd2f30340e"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.613879 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-inventory" (OuterVolumeSpecName: "inventory") pod "42f930a2-ac0c-43b5-ab17-1ccd2f30340e" (UID: "42f930a2-ac0c-43b5-ab17-1ccd2f30340e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.643655 4779 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.643699 4779 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.643712 4779 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.643725 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.643737 4779 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.643749 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.643760 4779 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.643771 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2j99\" (UniqueName: \"kubernetes.io/projected/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-kube-api-access-q2j99\") on node \"crc\" DevicePath \"\""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.643784 4779 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/42f930a2-ac0c-43b5-ab17-1ccd2f30340e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.990117 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4" event={"ID":"42f930a2-ac0c-43b5-ab17-1ccd2f30340e","Type":"ContainerDied","Data":"2f782ecd9743b4e0b9b8d73d88687690cbf0de57234dcaf85e9457797f1e1ef0"}
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.990650 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f782ecd9743b4e0b9b8d73d88687690cbf0de57234dcaf85e9457797f1e1ef0"
Nov 28 13:20:54 crc kubenswrapper[4779]: I1128 13:20:54.990781 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k9nk4"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.154904 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"]
Nov 28 13:20:55 crc kubenswrapper[4779]: E1128 13:20:55.155452 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" containerName="extract-content"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.155486 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" containerName="extract-content"
Nov 28 13:20:55 crc kubenswrapper[4779]: E1128 13:20:55.155516 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" containerName="extract-utilities"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.155529 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" containerName="extract-utilities"
Nov 28 13:20:55 crc kubenswrapper[4779]: E1128 13:20:55.155552 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" containerName="registry-server"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.155562 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" containerName="registry-server"
Nov 28 13:20:55 crc kubenswrapper[4779]: E1128 13:20:55.155594 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f930a2-ac0c-43b5-ab17-1ccd2f30340e" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.155607 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f930a2-ac0c-43b5-ab17-1ccd2f30340e" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.155943 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f930a2-ac0c-43b5-ab17-1ccd2f30340e" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.155969 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc45ad5e-ca6e-4f30-a01e-93fbee1ad2e0" containerName="registry-server"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.157064 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.159501 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zfcth"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.160627 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.161208 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.161450 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.161551 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.173349 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"]
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.254785 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.254887 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.254987 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.255054 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bpgd\" (UniqueName: \"kubernetes.io/projected/1e165593-fee0-4b82-87b3-6f102fdabe4f-kube-api-access-8bpgd\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.255081 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.255189 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.255213 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.356967 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.357063 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.357157 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.357226 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bpgd\" (UniqueName: \"kubernetes.io/projected/1e165593-fee0-4b82-87b3-6f102fdabe4f-kube-api-access-8bpgd\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.357250 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.357291 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.357315 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.362841 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.364045 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.366059 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.366832 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.368052 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.371836 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.390801 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bpgd\" (UniqueName: \"kubernetes.io/projected/1e165593-fee0-4b82-87b3-6f102fdabe4f-kube-api-access-8bpgd\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:55 crc kubenswrapper[4779]: I1128 13:20:55.504273 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"
Nov 28 13:20:56 crc kubenswrapper[4779]: I1128 13:20:56.229858 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr"]
Nov 28 13:20:57 crc kubenswrapper[4779]: I1128 13:20:57.012201 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr" event={"ID":"1e165593-fee0-4b82-87b3-6f102fdabe4f","Type":"ContainerStarted","Data":"b2daa75e0640bc7a42a0c1314572fe6485c518261bf1cc8322ce3395ad5f33f6"}
Nov 28 13:20:57 crc kubenswrapper[4779]: I1128 13:20:57.012614 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr" event={"ID":"1e165593-fee0-4b82-87b3-6f102fdabe4f","Type":"ContainerStarted","Data":"c601faa0bdeb5f3e5e421ee6b0dbf5e83ee967931b27acf93d81d17bf7f4e3a5"}
Nov 28 13:20:57 crc kubenswrapper[4779]: I1128 13:20:57.046999 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr" podStartSLOduration=1.595475811 podStartE2EDuration="2.046975189s" podCreationTimestamp="2025-11-28 13:20:55 +0000 UTC" firstStartedPulling="2025-11-28 13:20:56.238919259 +0000 UTC m=+2716.804594613" lastFinishedPulling="2025-11-28 13:20:56.690418627 +0000 UTC m=+2717.256093991" observedRunningTime="2025-11-28 13:20:57.041237255 +0000 UTC m=+2717.606912649" watchObservedRunningTime="2025-11-28 13:20:57.046975189 +0000 UTC m=+2717.612650563"
Nov 28 13:21:16 crc kubenswrapper[4779]: I1128 13:21:16.285351 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:21:16 crc kubenswrapper[4779]: I1128 13:21:16.285886 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:21:46 crc kubenswrapper[4779]: I1128 13:21:46.284478 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:21:46 crc kubenswrapper[4779]: I1128 13:21:46.285068 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:21:46 crc kubenswrapper[4779]: I1128 13:21:46.285170 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2"
Nov 28 13:21:46 crc kubenswrapper[4779]: I1128 13:21:46.286305 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10993b5460bb0b6d68e62c004e58a6435fe581e4d0f8daaff2aaa3cfe0a29888"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 13:21:46 crc kubenswrapper[4779]: I1128 13:21:46.286377 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://10993b5460bb0b6d68e62c004e58a6435fe581e4d0f8daaff2aaa3cfe0a29888" gracePeriod=600
Nov 28 13:21:47 crc kubenswrapper[4779]: I1128 13:21:47.135206 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="10993b5460bb0b6d68e62c004e58a6435fe581e4d0f8daaff2aaa3cfe0a29888" exitCode=0
Nov 28 13:21:47 crc kubenswrapper[4779]: I1128 13:21:47.135294 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"10993b5460bb0b6d68e62c004e58a6435fe581e4d0f8daaff2aaa3cfe0a29888"}
Nov 28 13:21:47 crc kubenswrapper[4779]: I1128 13:21:47.135779 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"}
Nov 28 13:21:47 crc kubenswrapper[4779]: I1128 13:21:47.135802 4779 scope.go:117] "RemoveContainer" containerID="0f23bfc3f21acb42c5efa181b7a3f3d8dc174ef89b59a8d8e8ca5b8924483e94"
Nov 28 13:22:13 crc kubenswrapper[4779]: I1128 13:22:13.901554 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xk8h5"]
Nov 28 13:22:13 crc kubenswrapper[4779]: I1128 13:22:13.903896 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:13 crc kubenswrapper[4779]: I1128 13:22:13.923598 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xk8h5"]
Nov 28 13:22:14 crc kubenswrapper[4779]: I1128 13:22:14.096112 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b64de0-f3cc-4d7c-910c-4660f951db5d-catalog-content\") pod \"community-operators-xk8h5\" (UID: \"78b64de0-f3cc-4d7c-910c-4660f951db5d\") " pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:14 crc kubenswrapper[4779]: I1128 13:22:14.096270 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b64de0-f3cc-4d7c-910c-4660f951db5d-utilities\") pod \"community-operators-xk8h5\" (UID: \"78b64de0-f3cc-4d7c-910c-4660f951db5d\") " pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:14 crc kubenswrapper[4779]: I1128 13:22:14.096363 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl6vc\" (UniqueName: \"kubernetes.io/projected/78b64de0-f3cc-4d7c-910c-4660f951db5d-kube-api-access-jl6vc\") pod \"community-operators-xk8h5\" (UID: \"78b64de0-f3cc-4d7c-910c-4660f951db5d\") " pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:14 crc kubenswrapper[4779]: I1128 13:22:14.198440 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b64de0-f3cc-4d7c-910c-4660f951db5d-utilities\") pod \"community-operators-xk8h5\" (UID: \"78b64de0-f3cc-4d7c-910c-4660f951db5d\") " pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:14 crc kubenswrapper[4779]: I1128 13:22:14.198563 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl6vc\" (UniqueName: \"kubernetes.io/projected/78b64de0-f3cc-4d7c-910c-4660f951db5d-kube-api-access-jl6vc\") pod \"community-operators-xk8h5\" (UID: \"78b64de0-f3cc-4d7c-910c-4660f951db5d\") " pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:14 crc kubenswrapper[4779]: I1128 13:22:14.198639 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b64de0-f3cc-4d7c-910c-4660f951db5d-catalog-content\") pod \"community-operators-xk8h5\" (UID: \"78b64de0-f3cc-4d7c-910c-4660f951db5d\") " pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:14 crc kubenswrapper[4779]: I1128 13:22:14.198984 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b64de0-f3cc-4d7c-910c-4660f951db5d-utilities\") pod \"community-operators-xk8h5\" (UID: \"78b64de0-f3cc-4d7c-910c-4660f951db5d\") " pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:14 crc kubenswrapper[4779]: I1128 13:22:14.199036 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b64de0-f3cc-4d7c-910c-4660f951db5d-catalog-content\") pod \"community-operators-xk8h5\" (UID: \"78b64de0-f3cc-4d7c-910c-4660f951db5d\") " pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:14 crc kubenswrapper[4779]: I1128 13:22:14.224732 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl6vc\" (UniqueName: \"kubernetes.io/projected/78b64de0-f3cc-4d7c-910c-4660f951db5d-kube-api-access-jl6vc\") pod \"community-operators-xk8h5\" (UID: \"78b64de0-f3cc-4d7c-910c-4660f951db5d\") " pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:14 crc kubenswrapper[4779]: I1128 13:22:14.272569 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:14 crc kubenswrapper[4779]: I1128 13:22:14.786538 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xk8h5"]
Nov 28 13:22:15 crc kubenswrapper[4779]: I1128 13:22:15.425116 4779 generic.go:334] "Generic (PLEG): container finished" podID="78b64de0-f3cc-4d7c-910c-4660f951db5d" containerID="4a3f3aacafaf6373987835a51b264097d7eb031fb0c084eb12f06d72ac3f7661" exitCode=0
Nov 28 13:22:15 crc kubenswrapper[4779]: I1128 13:22:15.425220 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xk8h5" event={"ID":"78b64de0-f3cc-4d7c-910c-4660f951db5d","Type":"ContainerDied","Data":"4a3f3aacafaf6373987835a51b264097d7eb031fb0c084eb12f06d72ac3f7661"}
Nov 28 13:22:15 crc kubenswrapper[4779]: I1128 13:22:15.425430 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xk8h5" event={"ID":"78b64de0-f3cc-4d7c-910c-4660f951db5d","Type":"ContainerStarted","Data":"48a50211e45c58bd84e5de0adcb53c4ef46b3fad2d27cba8935c9000dff9fddb"}
Nov 28 13:22:17 crc kubenswrapper[4779]: I1128 13:22:17.451193 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xk8h5" event={"ID":"78b64de0-f3cc-4d7c-910c-4660f951db5d","Type":"ContainerStarted","Data":"bfaa06b285b6ac2281eeaa32138fc90e9ef81d1d9cdcd23714e3099f41f9933d"}
Nov 28 13:22:18 crc kubenswrapper[4779]: I1128 13:22:18.462487 4779 generic.go:334] "Generic (PLEG): container finished" podID="78b64de0-f3cc-4d7c-910c-4660f951db5d" containerID="bfaa06b285b6ac2281eeaa32138fc90e9ef81d1d9cdcd23714e3099f41f9933d" exitCode=0
Nov 28 13:22:18 crc kubenswrapper[4779]: I1128 13:22:18.462604 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xk8h5" event={"ID":"78b64de0-f3cc-4d7c-910c-4660f951db5d","Type":"ContainerDied","Data":"bfaa06b285b6ac2281eeaa32138fc90e9ef81d1d9cdcd23714e3099f41f9933d"}
Nov 28 13:22:19 crc kubenswrapper[4779]: I1128 13:22:19.479047 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xk8h5" event={"ID":"78b64de0-f3cc-4d7c-910c-4660f951db5d","Type":"ContainerStarted","Data":"7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a"}
Nov 28 13:22:19 crc kubenswrapper[4779]: I1128 13:22:19.519480 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xk8h5" podStartSLOduration=3.035876495 podStartE2EDuration="6.519453525s" podCreationTimestamp="2025-11-28 13:22:13 +0000 UTC" firstStartedPulling="2025-11-28 13:22:15.427728778 +0000 UTC m=+2795.993404142" lastFinishedPulling="2025-11-28 13:22:18.911305818 +0000 UTC m=+2799.476981172" observedRunningTime="2025-11-28 13:22:19.505204036 +0000 UTC m=+2800.070879410" watchObservedRunningTime="2025-11-28 13:22:19.519453525 +0000 UTC m=+2800.085128909"
Nov 28 13:22:24 crc kubenswrapper[4779]: I1128 13:22:24.273033 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:24 crc kubenswrapper[4779]: I1128 13:22:24.273796 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:24 crc kubenswrapper[4779]: I1128 13:22:24.381727 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:24 crc kubenswrapper[4779]: I1128 13:22:24.580527 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:24 crc kubenswrapper[4779]: I1128 13:22:24.633192 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xk8h5"]
Nov 28 13:22:26 crc kubenswrapper[4779]: I1128 13:22:26.544443 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xk8h5" podUID="78b64de0-f3cc-4d7c-910c-4660f951db5d" containerName="registry-server" containerID="cri-o://7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a" gracePeriod=2
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.031811 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xk8h5"
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.072666 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b64de0-f3cc-4d7c-910c-4660f951db5d-catalog-content\") pod \"78b64de0-f3cc-4d7c-910c-4660f951db5d\" (UID: \"78b64de0-f3cc-4d7c-910c-4660f951db5d\") "
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.072739 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b64de0-f3cc-4d7c-910c-4660f951db5d-utilities\") pod \"78b64de0-f3cc-4d7c-910c-4660f951db5d\" (UID: \"78b64de0-f3cc-4d7c-910c-4660f951db5d\") "
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.072822 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl6vc\" (UniqueName: \"kubernetes.io/projected/78b64de0-f3cc-4d7c-910c-4660f951db5d-kube-api-access-jl6vc\") pod \"78b64de0-f3cc-4d7c-910c-4660f951db5d\" (UID: \"78b64de0-f3cc-4d7c-910c-4660f951db5d\") "
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.075630 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78b64de0-f3cc-4d7c-910c-4660f951db5d-utilities" (OuterVolumeSpecName: "utilities") pod "78b64de0-f3cc-4d7c-910c-4660f951db5d" (UID: "78b64de0-f3cc-4d7c-910c-4660f951db5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.109511 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b64de0-f3cc-4d7c-910c-4660f951db5d-kube-api-access-jl6vc" (OuterVolumeSpecName: "kube-api-access-jl6vc") pod "78b64de0-f3cc-4d7c-910c-4660f951db5d" (UID: "78b64de0-f3cc-4d7c-910c-4660f951db5d"). InnerVolumeSpecName "kube-api-access-jl6vc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.151358 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78b64de0-f3cc-4d7c-910c-4660f951db5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78b64de0-f3cc-4d7c-910c-4660f951db5d" (UID: "78b64de0-f3cc-4d7c-910c-4660f951db5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.174603 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b64de0-f3cc-4d7c-910c-4660f951db5d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.174635 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b64de0-f3cc-4d7c-910c-4660f951db5d-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.174646 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl6vc\" (UniqueName: \"kubernetes.io/projected/78b64de0-f3cc-4d7c-910c-4660f951db5d-kube-api-access-jl6vc\") on node \"crc\" DevicePath \"\""
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.594435 4779 generic.go:334] "Generic (PLEG): container finished" podID="78b64de0-f3cc-4d7c-910c-4660f951db5d" containerID="7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a" exitCode=0
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.594503 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xk8h5" event={"ID":"78b64de0-f3cc-4d7c-910c-4660f951db5d","Type":"ContainerDied","Data":"7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a"}
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.594551 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xk8h5" event={"ID":"78b64de0-f3cc-4d7c-910c-4660f951db5d","Type":"ContainerDied","Data":"48a50211e45c58bd84e5de0adcb53c4ef46b3fad2d27cba8935c9000dff9fddb"}
Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.594568 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xk8h5" Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.594590 4779 scope.go:117] "RemoveContainer" containerID="7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a" Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.623612 4779 scope.go:117] "RemoveContainer" containerID="bfaa06b285b6ac2281eeaa32138fc90e9ef81d1d9cdcd23714e3099f41f9933d" Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.667047 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xk8h5"] Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.679893 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xk8h5"] Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.681133 4779 scope.go:117] "RemoveContainer" containerID="4a3f3aacafaf6373987835a51b264097d7eb031fb0c084eb12f06d72ac3f7661" Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.733361 4779 scope.go:117] "RemoveContainer" containerID="7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a" Nov 28 13:22:27 crc kubenswrapper[4779]: E1128 13:22:27.733855 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a\": container with ID starting with 7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a not found: ID does not exist" containerID="7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a" Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.733917 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a"} err="failed to get container status \"7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a\": rpc error: code = NotFound desc = could not find container \"7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a\": container with ID starting with 7a20f1dd8c76f1c9985af032fd78f46ddbbde45efabea629697e7ce52ad92c0a not found: ID does not exist" Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.733951 4779 scope.go:117] "RemoveContainer" containerID="bfaa06b285b6ac2281eeaa32138fc90e9ef81d1d9cdcd23714e3099f41f9933d" Nov 28 13:22:27 crc kubenswrapper[4779]: E1128 13:22:27.734830 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfaa06b285b6ac2281eeaa32138fc90e9ef81d1d9cdcd23714e3099f41f9933d\": container with ID starting with bfaa06b285b6ac2281eeaa32138fc90e9ef81d1d9cdcd23714e3099f41f9933d not found: ID does not exist" containerID="bfaa06b285b6ac2281eeaa32138fc90e9ef81d1d9cdcd23714e3099f41f9933d" Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.734870 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfaa06b285b6ac2281eeaa32138fc90e9ef81d1d9cdcd23714e3099f41f9933d"} err="failed to get container status \"bfaa06b285b6ac2281eeaa32138fc90e9ef81d1d9cdcd23714e3099f41f9933d\": rpc error: code = NotFound desc = could not find container \"bfaa06b285b6ac2281eeaa32138fc90e9ef81d1d9cdcd23714e3099f41f9933d\": container with ID starting with bfaa06b285b6ac2281eeaa32138fc90e9ef81d1d9cdcd23714e3099f41f9933d not found: ID does not exist" Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.734895 4779 scope.go:117] "RemoveContainer" 
containerID="4a3f3aacafaf6373987835a51b264097d7eb031fb0c084eb12f06d72ac3f7661" Nov 28 13:22:27 crc kubenswrapper[4779]: E1128 13:22:27.735225 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a3f3aacafaf6373987835a51b264097d7eb031fb0c084eb12f06d72ac3f7661\": container with ID starting with 4a3f3aacafaf6373987835a51b264097d7eb031fb0c084eb12f06d72ac3f7661 not found: ID does not exist" containerID="4a3f3aacafaf6373987835a51b264097d7eb031fb0c084eb12f06d72ac3f7661" Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.735253 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a3f3aacafaf6373987835a51b264097d7eb031fb0c084eb12f06d72ac3f7661"} err="failed to get container status \"4a3f3aacafaf6373987835a51b264097d7eb031fb0c084eb12f06d72ac3f7661\": rpc error: code = NotFound desc = could not find container \"4a3f3aacafaf6373987835a51b264097d7eb031fb0c084eb12f06d72ac3f7661\": container with ID starting with 4a3f3aacafaf6373987835a51b264097d7eb031fb0c084eb12f06d72ac3f7661 not found: ID does not exist" Nov 28 13:22:27 crc kubenswrapper[4779]: I1128 13:22:27.745138 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78b64de0-f3cc-4d7c-910c-4660f951db5d" path="/var/lib/kubelet/pods/78b64de0-f3cc-4d7c-910c-4660f951db5d/volumes" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.583363 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h58c9"] Nov 28 13:22:37 crc kubenswrapper[4779]: E1128 13:22:37.584881 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b64de0-f3cc-4d7c-910c-4660f951db5d" containerName="extract-content" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.584912 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b64de0-f3cc-4d7c-910c-4660f951db5d" containerName="extract-content" Nov 28 13:22:37 crc kubenswrapper[4779]: E1128 13:22:37.584965 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b64de0-f3cc-4d7c-910c-4660f951db5d" containerName="registry-server" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.584981 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b64de0-f3cc-4d7c-910c-4660f951db5d" containerName="registry-server" Nov 28 13:22:37 crc kubenswrapper[4779]: E1128 13:22:37.585015 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b64de0-f3cc-4d7c-910c-4660f951db5d" containerName="extract-utilities" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.585032 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b64de0-f3cc-4d7c-910c-4660f951db5d" containerName="extract-utilities" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.585514 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b64de0-f3cc-4d7c-910c-4660f951db5d" containerName="registry-server" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.588319 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.603044 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h58c9"] Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.707458 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljg2f\" (UniqueName: \"kubernetes.io/projected/bdc72a19-92b1-473d-85f7-2cb052870633-kube-api-access-ljg2f\") pod \"certified-operators-h58c9\" (UID: \"bdc72a19-92b1-473d-85f7-2cb052870633\") " pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.707791 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc72a19-92b1-473d-85f7-2cb052870633-catalog-content\") pod \"certified-operators-h58c9\" (UID: \"bdc72a19-92b1-473d-85f7-2cb052870633\") " pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.707915 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc72a19-92b1-473d-85f7-2cb052870633-utilities\") pod \"certified-operators-h58c9\" (UID: \"bdc72a19-92b1-473d-85f7-2cb052870633\") " pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.810319 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc72a19-92b1-473d-85f7-2cb052870633-catalog-content\") pod \"certified-operators-h58c9\" (UID: \"bdc72a19-92b1-473d-85f7-2cb052870633\") " pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.810950 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc72a19-92b1-473d-85f7-2cb052870633-utilities\") pod \"certified-operators-h58c9\" (UID: \"bdc72a19-92b1-473d-85f7-2cb052870633\") " pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.811119 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljg2f\" (UniqueName: \"kubernetes.io/projected/bdc72a19-92b1-473d-85f7-2cb052870633-kube-api-access-ljg2f\") pod \"certified-operators-h58c9\" (UID: \"bdc72a19-92b1-473d-85f7-2cb052870633\") " pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.810972 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc72a19-92b1-473d-85f7-2cb052870633-catalog-content\") pod \"certified-operators-h58c9\" (UID: \"bdc72a19-92b1-473d-85f7-2cb052870633\") " pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.811547 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc72a19-92b1-473d-85f7-2cb052870633-utilities\") pod \"certified-operators-h58c9\" (UID: \"bdc72a19-92b1-473d-85f7-2cb052870633\") " pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.839737 4779 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ljg2f\" (UniqueName: \"kubernetes.io/projected/bdc72a19-92b1-473d-85f7-2cb052870633-kube-api-access-ljg2f\") pod \"certified-operators-h58c9\" (UID: \"bdc72a19-92b1-473d-85f7-2cb052870633\") " pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:37 crc kubenswrapper[4779]: I1128 13:22:37.929701 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:38 crc kubenswrapper[4779]: I1128 13:22:38.451009 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h58c9"] Nov 28 13:22:38 crc kubenswrapper[4779]: I1128 13:22:38.716861 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h58c9" event={"ID":"bdc72a19-92b1-473d-85f7-2cb052870633","Type":"ContainerStarted","Data":"a3c0abb1ae94c49d501e2f36e6125fcf882b9fd8a974c39fa21c154be8ea93ca"} Nov 28 13:22:38 crc kubenswrapper[4779]: I1128 13:22:38.716943 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h58c9" event={"ID":"bdc72a19-92b1-473d-85f7-2cb052870633","Type":"ContainerStarted","Data":"5dfab257f34d3c6ec488fd99c8834f26d978280dd2a787154a152262eef7aeea"} Nov 28 13:22:39 crc kubenswrapper[4779]: I1128 13:22:39.736446 4779 generic.go:334] "Generic (PLEG): container finished" podID="bdc72a19-92b1-473d-85f7-2cb052870633" containerID="a3c0abb1ae94c49d501e2f36e6125fcf882b9fd8a974c39fa21c154be8ea93ca" exitCode=0 Nov 28 13:22:39 crc kubenswrapper[4779]: I1128 13:22:39.742702 4779 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 13:22:39 crc kubenswrapper[4779]: I1128 13:22:39.755873 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h58c9" event={"ID":"bdc72a19-92b1-473d-85f7-2cb052870633","Type":"ContainerDied","Data":"a3c0abb1ae94c49d501e2f36e6125fcf882b9fd8a974c39fa21c154be8ea93ca"} Nov 28 13:22:41 crc kubenswrapper[4779]: I1128 13:22:41.759035 4779 generic.go:334] "Generic (PLEG): container finished" podID="bdc72a19-92b1-473d-85f7-2cb052870633" containerID="a2103fd848b8dd4c99f3ff3fd25a56384eb6295a1bf09f803c26b9c2f991242e" exitCode=0 Nov 28 13:22:41 crc kubenswrapper[4779]: I1128 13:22:41.759122 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h58c9" event={"ID":"bdc72a19-92b1-473d-85f7-2cb052870633","Type":"ContainerDied","Data":"a2103fd848b8dd4c99f3ff3fd25a56384eb6295a1bf09f803c26b9c2f991242e"} Nov 28 13:22:42 crc kubenswrapper[4779]: I1128 13:22:42.773999 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h58c9" event={"ID":"bdc72a19-92b1-473d-85f7-2cb052870633","Type":"ContainerStarted","Data":"a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c"} Nov 28 13:22:42 crc kubenswrapper[4779]: I1128 13:22:42.804742 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h58c9" podStartSLOduration=3.344867938 podStartE2EDuration="5.804715929s" podCreationTimestamp="2025-11-28 13:22:37 +0000 UTC" firstStartedPulling="2025-11-28 13:22:39.74225877 +0000 UTC m=+2820.307934164" lastFinishedPulling="2025-11-28 13:22:42.202106801 +0000 UTC m=+2822.767782155" observedRunningTime="2025-11-28 13:22:42.789415792 +0000 UTC m=+2823.355091166" watchObservedRunningTime="2025-11-28 
13:22:42.804715929 +0000 UTC m=+2823.370391273" Nov 28 13:22:47 crc kubenswrapper[4779]: I1128 13:22:47.929898 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:47 crc kubenswrapper[4779]: I1128 13:22:47.930524 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:48 crc kubenswrapper[4779]: I1128 13:22:48.019915 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:48 crc kubenswrapper[4779]: I1128 13:22:48.898556 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:48 crc kubenswrapper[4779]: I1128 13:22:48.957430 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h58c9"] Nov 28 13:22:50 crc kubenswrapper[4779]: I1128 13:22:50.856656 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h58c9" podUID="bdc72a19-92b1-473d-85f7-2cb052870633" containerName="registry-server" containerID="cri-o://a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c" gracePeriod=2 Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.360751 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.507040 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc72a19-92b1-473d-85f7-2cb052870633-utilities\") pod \"bdc72a19-92b1-473d-85f7-2cb052870633\" (UID: \"bdc72a19-92b1-473d-85f7-2cb052870633\") " Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.507165 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljg2f\" (UniqueName: \"kubernetes.io/projected/bdc72a19-92b1-473d-85f7-2cb052870633-kube-api-access-ljg2f\") pod \"bdc72a19-92b1-473d-85f7-2cb052870633\" (UID: \"bdc72a19-92b1-473d-85f7-2cb052870633\") " Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.507425 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc72a19-92b1-473d-85f7-2cb052870633-catalog-content\") pod \"bdc72a19-92b1-473d-85f7-2cb052870633\" (UID: \"bdc72a19-92b1-473d-85f7-2cb052870633\") " Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.508236 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc72a19-92b1-473d-85f7-2cb052870633-utilities" (OuterVolumeSpecName: "utilities") pod "bdc72a19-92b1-473d-85f7-2cb052870633" (UID: "bdc72a19-92b1-473d-85f7-2cb052870633"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.515200 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc72a19-92b1-473d-85f7-2cb052870633-kube-api-access-ljg2f" (OuterVolumeSpecName: "kube-api-access-ljg2f") pod "bdc72a19-92b1-473d-85f7-2cb052870633" (UID: "bdc72a19-92b1-473d-85f7-2cb052870633"). InnerVolumeSpecName "kube-api-access-ljg2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.595907 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc72a19-92b1-473d-85f7-2cb052870633-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdc72a19-92b1-473d-85f7-2cb052870633" (UID: "bdc72a19-92b1-473d-85f7-2cb052870633"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.610190 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc72a19-92b1-473d-85f7-2cb052870633-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.610228 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc72a19-92b1-473d-85f7-2cb052870633-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.610246 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljg2f\" (UniqueName: \"kubernetes.io/projected/bdc72a19-92b1-473d-85f7-2cb052870633-kube-api-access-ljg2f\") on node \"crc\" DevicePath \"\"" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.867901 4779 generic.go:334] "Generic (PLEG): container finished" podID="bdc72a19-92b1-473d-85f7-2cb052870633" containerID="a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c" exitCode=0 Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.867944 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h58c9" event={"ID":"bdc72a19-92b1-473d-85f7-2cb052870633","Type":"ContainerDied","Data":"a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c"} Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.867971 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h58c9" event={"ID":"bdc72a19-92b1-473d-85f7-2cb052870633","Type":"ContainerDied","Data":"5dfab257f34d3c6ec488fd99c8834f26d978280dd2a787154a152262eef7aeea"} Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.868026 4779 scope.go:117] "RemoveContainer" containerID="a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.868174 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h58c9" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.892429 4779 scope.go:117] "RemoveContainer" containerID="a2103fd848b8dd4c99f3ff3fd25a56384eb6295a1bf09f803c26b9c2f991242e" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.895495 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h58c9"] Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.905288 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h58c9"] Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.917980 4779 scope.go:117] "RemoveContainer" containerID="a3c0abb1ae94c49d501e2f36e6125fcf882b9fd8a974c39fa21c154be8ea93ca" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.956615 4779 scope.go:117] "RemoveContainer" containerID="a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c" Nov 28 13:22:51 crc kubenswrapper[4779]: E1128 13:22:51.957005 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c\": container with ID starting with a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c not found: ID does not exist" containerID="a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.957051 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c"} err="failed to get container status \"a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c\": rpc error: code = NotFound desc = could not find container \"a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c\": container with ID starting with a192881d7c627d6a3afaace9d330353664096a9ed8a29b0e855c310ab266360c not found: ID does not exist" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.957083 4779 scope.go:117] "RemoveContainer" containerID="a2103fd848b8dd4c99f3ff3fd25a56384eb6295a1bf09f803c26b9c2f991242e" Nov 28 13:22:51 crc kubenswrapper[4779]: E1128 13:22:51.957388 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2103fd848b8dd4c99f3ff3fd25a56384eb6295a1bf09f803c26b9c2f991242e\": container with ID starting with a2103fd848b8dd4c99f3ff3fd25a56384eb6295a1bf09f803c26b9c2f991242e not found: ID does not exist" containerID="a2103fd848b8dd4c99f3ff3fd25a56384eb6295a1bf09f803c26b9c2f991242e" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.957434 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2103fd848b8dd4c99f3ff3fd25a56384eb6295a1bf09f803c26b9c2f991242e"} err="failed to get container status \"a2103fd848b8dd4c99f3ff3fd25a56384eb6295a1bf09f803c26b9c2f991242e\": rpc error: code = NotFound desc = could not find container \"a2103fd848b8dd4c99f3ff3fd25a56384eb6295a1bf09f803c26b9c2f991242e\": container with ID starting with a2103fd848b8dd4c99f3ff3fd25a56384eb6295a1bf09f803c26b9c2f991242e not found: ID does not exist" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.957452 4779 scope.go:117] "RemoveContainer" containerID="a3c0abb1ae94c49d501e2f36e6125fcf882b9fd8a974c39fa21c154be8ea93ca" Nov 28 13:22:51 crc kubenswrapper[4779]: E1128 13:22:51.957722 4779 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a3c0abb1ae94c49d501e2f36e6125fcf882b9fd8a974c39fa21c154be8ea93ca\": container with ID starting with a3c0abb1ae94c49d501e2f36e6125fcf882b9fd8a974c39fa21c154be8ea93ca not found: ID does not exist" containerID="a3c0abb1ae94c49d501e2f36e6125fcf882b9fd8a974c39fa21c154be8ea93ca" Nov 28 13:22:51 crc kubenswrapper[4779]: I1128 13:22:51.957765 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c0abb1ae94c49d501e2f36e6125fcf882b9fd8a974c39fa21c154be8ea93ca"} err="failed to get container status \"a3c0abb1ae94c49d501e2f36e6125fcf882b9fd8a974c39fa21c154be8ea93ca\": rpc error: code = NotFound desc = could not find container \"a3c0abb1ae94c49d501e2f36e6125fcf882b9fd8a974c39fa21c154be8ea93ca\": container with ID starting with a3c0abb1ae94c49d501e2f36e6125fcf882b9fd8a974c39fa21c154be8ea93ca not found: ID does not exist" Nov 28 13:22:53 crc kubenswrapper[4779]: I1128 13:22:53.735306 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc72a19-92b1-473d-85f7-2cb052870633" path="/var/lib/kubelet/pods/bdc72a19-92b1-473d-85f7-2cb052870633/volumes" Nov 28 13:23:39 crc kubenswrapper[4779]: I1128 13:23:39.381342 4779 generic.go:334] "Generic (PLEG): container finished" podID="1e165593-fee0-4b82-87b3-6f102fdabe4f" containerID="b2daa75e0640bc7a42a0c1314572fe6485c518261bf1cc8322ce3395ad5f33f6" exitCode=0 Nov 28 13:23:39 crc kubenswrapper[4779]: I1128 13:23:39.381450 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr" event={"ID":"1e165593-fee0-4b82-87b3-6f102fdabe4f","Type":"ContainerDied","Data":"b2daa75e0640bc7a42a0c1314572fe6485c518261bf1cc8322ce3395ad5f33f6"} Nov 28 13:23:40 crc kubenswrapper[4779]: I1128 13:23:40.894548 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.003953 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-2\") pod \"1e165593-fee0-4b82-87b3-6f102fdabe4f\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.004009 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-0\") pod \"1e165593-fee0-4b82-87b3-6f102fdabe4f\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.004037 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-inventory\") pod \"1e165593-fee0-4b82-87b3-6f102fdabe4f\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.004060 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-telemetry-combined-ca-bundle\") pod \"1e165593-fee0-4b82-87b3-6f102fdabe4f\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.004081 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ssh-key\") pod \"1e165593-fee0-4b82-87b3-6f102fdabe4f\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.004149 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bpgd\" (UniqueName: \"kubernetes.io/projected/1e165593-fee0-4b82-87b3-6f102fdabe4f-kube-api-access-8bpgd\") pod \"1e165593-fee0-4b82-87b3-6f102fdabe4f\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.004180 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-1\") pod \"1e165593-fee0-4b82-87b3-6f102fdabe4f\" (UID: \"1e165593-fee0-4b82-87b3-6f102fdabe4f\") " Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.017182 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e165593-fee0-4b82-87b3-6f102fdabe4f-kube-api-access-8bpgd" (OuterVolumeSpecName: "kube-api-access-8bpgd") pod "1e165593-fee0-4b82-87b3-6f102fdabe4f" (UID: "1e165593-fee0-4b82-87b3-6f102fdabe4f"). InnerVolumeSpecName "kube-api-access-8bpgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.018476 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1e165593-fee0-4b82-87b3-6f102fdabe4f" (UID: "1e165593-fee0-4b82-87b3-6f102fdabe4f"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.041263 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-inventory" (OuterVolumeSpecName: "inventory") pod "1e165593-fee0-4b82-87b3-6f102fdabe4f" (UID: "1e165593-fee0-4b82-87b3-6f102fdabe4f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.044145 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "1e165593-fee0-4b82-87b3-6f102fdabe4f" (UID: "1e165593-fee0-4b82-87b3-6f102fdabe4f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.047871 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1e165593-fee0-4b82-87b3-6f102fdabe4f" (UID: "1e165593-fee0-4b82-87b3-6f102fdabe4f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.056241 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "1e165593-fee0-4b82-87b3-6f102fdabe4f" (UID: "1e165593-fee0-4b82-87b3-6f102fdabe4f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.066283 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "1e165593-fee0-4b82-87b3-6f102fdabe4f" (UID: "1e165593-fee0-4b82-87b3-6f102fdabe4f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.106488 4779 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.106543 4779 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.106566 4779 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.106584 4779 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.106600 4779 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.106617 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bpgd\" (UniqueName: \"kubernetes.io/projected/1e165593-fee0-4b82-87b3-6f102fdabe4f-kube-api-access-8bpgd\") on node \"crc\" DevicePath \"\"" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.106634 4779 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/1e165593-fee0-4b82-87b3-6f102fdabe4f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.405259 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr" event={"ID":"1e165593-fee0-4b82-87b3-6f102fdabe4f","Type":"ContainerDied","Data":"c601faa0bdeb5f3e5e421ee6b0dbf5e83ee967931b27acf93d81d17bf7f4e3a5"} Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.405307 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c601faa0bdeb5f3e5e421ee6b0dbf5e83ee967931b27acf93d81d17bf7f4e3a5" Nov 28 13:23:41 crc kubenswrapper[4779]: I1128 13:23:41.405344 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr" Nov 28 13:23:46 crc kubenswrapper[4779]: I1128 13:23:46.285031 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:23:46 crc kubenswrapper[4779]: I1128 13:23:46.285966 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:23:51 crc kubenswrapper[4779]: I1128 13:23:51.940425 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n59kn"] Nov 28 13:23:51 crc kubenswrapper[4779]: E1128 13:23:51.941613 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e165593-fee0-4b82-87b3-6f102fdabe4f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 13:23:51 crc kubenswrapper[4779]: I1128 13:23:51.941637 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e165593-fee0-4b82-87b3-6f102fdabe4f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 13:23:51 crc kubenswrapper[4779]: E1128 13:23:51.941683 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc72a19-92b1-473d-85f7-2cb052870633" containerName="registry-server" Nov 28 13:23:51 crc kubenswrapper[4779]: I1128 13:23:51.941695 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc72a19-92b1-473d-85f7-2cb052870633" containerName="registry-server" Nov 28 13:23:51 crc kubenswrapper[4779]: E1128 13:23:51.941710 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc72a19-92b1-473d-85f7-2cb052870633" containerName="extract-content" Nov 28 13:23:51 crc kubenswrapper[4779]: I1128 13:23:51.941725 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc72a19-92b1-473d-85f7-2cb052870633" containerName="extract-content" Nov 28 13:23:51 crc kubenswrapper[4779]: E1128 13:23:51.941755 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc72a19-92b1-473d-85f7-2cb052870633" containerName="extract-utilities" Nov 28 13:23:51 crc kubenswrapper[4779]: I1128 13:23:51.941768 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc72a19-92b1-473d-85f7-2cb052870633" containerName="extract-utilities" Nov 28 13:23:51 crc kubenswrapper[4779]: I1128 13:23:51.942142 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e165593-fee0-4b82-87b3-6f102fdabe4f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 13:23:51 crc kubenswrapper[4779]: I1128 13:23:51.942188 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc72a19-92b1-473d-85f7-2cb052870633" containerName="registry-server" Nov 28 13:23:51 crc kubenswrapper[4779]: I1128 13:23:51.944490 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:23:51 crc kubenswrapper[4779]: I1128 13:23:51.975463 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1337d74-d239-4d6a-b748-0b02fc1656ba-catalog-content\") pod \"redhat-marketplace-n59kn\" (UID: \"e1337d74-d239-4d6a-b748-0b02fc1656ba\") " pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:23:51 crc kubenswrapper[4779]: I1128 13:23:51.976157 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzpwp\" (UniqueName: \"kubernetes.io/projected/e1337d74-d239-4d6a-b748-0b02fc1656ba-kube-api-access-hzpwp\") pod \"redhat-marketplace-n59kn\" (UID: \"e1337d74-d239-4d6a-b748-0b02fc1656ba\") " pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:23:51 crc kubenswrapper[4779]: I1128 13:23:51.976294 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1337d74-d239-4d6a-b748-0b02fc1656ba-utilities\") pod \"redhat-marketplace-n59kn\" (UID: \"e1337d74-d239-4d6a-b748-0b02fc1656ba\") " pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:23:51 crc kubenswrapper[4779]: I1128 13:23:51.976677 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n59kn"] Nov 28 13:23:52 crc kubenswrapper[4779]: I1128 13:23:52.078164 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1337d74-d239-4d6a-b748-0b02fc1656ba-catalog-content\") pod \"redhat-marketplace-n59kn\" (UID: \"e1337d74-d239-4d6a-b748-0b02fc1656ba\") " pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:23:52 crc kubenswrapper[4779]: I1128 13:23:52.078249 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzpwp\" (UniqueName: \"kubernetes.io/projected/e1337d74-d239-4d6a-b748-0b02fc1656ba-kube-api-access-hzpwp\") pod \"redhat-marketplace-n59kn\" (UID: \"e1337d74-d239-4d6a-b748-0b02fc1656ba\") " pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:23:52 crc kubenswrapper[4779]: I1128 13:23:52.078288 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1337d74-d239-4d6a-b748-0b02fc1656ba-utilities\") pod \"redhat-marketplace-n59kn\" (UID: \"e1337d74-d239-4d6a-b748-0b02fc1656ba\") " pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:23:52 crc kubenswrapper[4779]: I1128 13:23:52.078858 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1337d74-d239-4d6a-b748-0b02fc1656ba-utilities\") pod \"redhat-marketplace-n59kn\" (UID: \"e1337d74-d239-4d6a-b748-0b02fc1656ba\") " pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:23:52 crc kubenswrapper[4779]: I1128 13:23:52.079225 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1337d74-d239-4d6a-b748-0b02fc1656ba-catalog-content\") pod \"redhat-marketplace-n59kn\" (UID: \"e1337d74-d239-4d6a-b748-0b02fc1656ba\") " pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:23:52 crc kubenswrapper[4779]: I1128 13:23:52.109169 4779 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hzpwp\" (UniqueName: \"kubernetes.io/projected/e1337d74-d239-4d6a-b748-0b02fc1656ba-kube-api-access-hzpwp\") pod \"redhat-marketplace-n59kn\" (UID: \"e1337d74-d239-4d6a-b748-0b02fc1656ba\") " pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:23:52 crc kubenswrapper[4779]: I1128 13:23:52.285560 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:23:52 crc kubenswrapper[4779]: I1128 13:23:52.737494 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n59kn"] Nov 28 13:23:52 crc kubenswrapper[4779]: W1128 13:23:52.748566 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1337d74_d239_4d6a_b748_0b02fc1656ba.slice/crio-c11de759faa0e709e7aef7beb93c58c9c3ace679ae2f015cbd9162239c1150c2 WatchSource:0}: Error finding container c11de759faa0e709e7aef7beb93c58c9c3ace679ae2f015cbd9162239c1150c2: Status 404 returned error can't find the container with id c11de759faa0e709e7aef7beb93c58c9c3ace679ae2f015cbd9162239c1150c2 Nov 28 13:23:53 crc kubenswrapper[4779]: I1128 13:23:53.534506 4779 generic.go:334] "Generic (PLEG): container finished" podID="e1337d74-d239-4d6a-b748-0b02fc1656ba" containerID="a541615f14f9f273dd6c9b2d03b170e2b0e59a2b770c8841b1d217f4bce06104" exitCode=0 Nov 28 13:23:53 crc kubenswrapper[4779]: I1128 13:23:53.534577 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n59kn" event={"ID":"e1337d74-d239-4d6a-b748-0b02fc1656ba","Type":"ContainerDied","Data":"a541615f14f9f273dd6c9b2d03b170e2b0e59a2b770c8841b1d217f4bce06104"} Nov 28 13:23:53 crc kubenswrapper[4779]: I1128 13:23:53.534934 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n59kn" event={"ID":"e1337d74-d239-4d6a-b748-0b02fc1656ba","Type":"ContainerStarted","Data":"c11de759faa0e709e7aef7beb93c58c9c3ace679ae2f015cbd9162239c1150c2"} Nov 28 13:23:54 crc kubenswrapper[4779]: I1128 13:23:54.551885 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n59kn" event={"ID":"e1337d74-d239-4d6a-b748-0b02fc1656ba","Type":"ContainerStarted","Data":"73fc61e9b90ba79689824eaeac35e6b577249c30a84976712d5265d697bfd40f"} Nov 28 13:23:55 crc kubenswrapper[4779]: I1128 13:23:55.567655 4779 generic.go:334] "Generic (PLEG): container finished" podID="e1337d74-d239-4d6a-b748-0b02fc1656ba" containerID="73fc61e9b90ba79689824eaeac35e6b577249c30a84976712d5265d697bfd40f" exitCode=0 Nov 28 13:23:55 crc kubenswrapper[4779]: I1128 13:23:55.567719 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n59kn" event={"ID":"e1337d74-d239-4d6a-b748-0b02fc1656ba","Type":"ContainerDied","Data":"73fc61e9b90ba79689824eaeac35e6b577249c30a84976712d5265d697bfd40f"} Nov 28 13:23:56 crc kubenswrapper[4779]: I1128 13:23:56.590170 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n59kn" event={"ID":"e1337d74-d239-4d6a-b748-0b02fc1656ba","Type":"ContainerStarted","Data":"43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb"} Nov 28 13:23:56 crc kubenswrapper[4779]: I1128 13:23:56.624191 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n59kn" podStartSLOduration=2.957917709 
podStartE2EDuration="5.624072267s" podCreationTimestamp="2025-11-28 13:23:51 +0000 UTC" firstStartedPulling="2025-11-28 13:23:53.538063411 +0000 UTC m=+2894.103738805" lastFinishedPulling="2025-11-28 13:23:56.204217979 +0000 UTC m=+2896.769893363" observedRunningTime="2025-11-28 13:23:56.617846192 +0000 UTC m=+2897.183521556" watchObservedRunningTime="2025-11-28 13:23:56.624072267 +0000 UTC m=+2897.189747631" Nov 28 13:24:02 crc kubenswrapper[4779]: I1128 13:24:02.286329 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:24:02 crc kubenswrapper[4779]: I1128 13:24:02.287407 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:24:02 crc kubenswrapper[4779]: I1128 13:24:02.365845 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:24:02 crc kubenswrapper[4779]: I1128 13:24:02.751773 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n59kn" Nov 28 13:24:02 crc kubenswrapper[4779]: I1128 13:24:02.822581 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n59kn"] Nov 28 13:24:04 crc kubenswrapper[4779]: I1128 13:24:04.680298 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n59kn" podUID="e1337d74-d239-4d6a-b748-0b02fc1656ba" containerName="registry-server" containerID="cri-o://43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb" gracePeriod=2 Nov 28 13:24:04 crc kubenswrapper[4779]: E1128 13:24:04.922253 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1337d74_d239_4d6a_b748_0b02fc1656ba.slice/crio-43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1337d74_d239_4d6a_b748_0b02fc1656ba.slice/crio-conmon-43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb.scope\": RecentStats: unable to find data in memory cache]" Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.213468 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n59kn"
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.307331 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzpwp\" (UniqueName: \"kubernetes.io/projected/e1337d74-d239-4d6a-b748-0b02fc1656ba-kube-api-access-hzpwp\") pod \"e1337d74-d239-4d6a-b748-0b02fc1656ba\" (UID: \"e1337d74-d239-4d6a-b748-0b02fc1656ba\") "
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.307504 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1337d74-d239-4d6a-b748-0b02fc1656ba-catalog-content\") pod \"e1337d74-d239-4d6a-b748-0b02fc1656ba\" (UID: \"e1337d74-d239-4d6a-b748-0b02fc1656ba\") "
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.307568 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1337d74-d239-4d6a-b748-0b02fc1656ba-utilities\") pod \"e1337d74-d239-4d6a-b748-0b02fc1656ba\" (UID: \"e1337d74-d239-4d6a-b748-0b02fc1656ba\") "
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.309291 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1337d74-d239-4d6a-b748-0b02fc1656ba-utilities" (OuterVolumeSpecName: "utilities") pod "e1337d74-d239-4d6a-b748-0b02fc1656ba" (UID: "e1337d74-d239-4d6a-b748-0b02fc1656ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.318432 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1337d74-d239-4d6a-b748-0b02fc1656ba-kube-api-access-hzpwp" (OuterVolumeSpecName: "kube-api-access-hzpwp") pod "e1337d74-d239-4d6a-b748-0b02fc1656ba" (UID: "e1337d74-d239-4d6a-b748-0b02fc1656ba"). InnerVolumeSpecName "kube-api-access-hzpwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.336782 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1337d74-d239-4d6a-b748-0b02fc1656ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1337d74-d239-4d6a-b748-0b02fc1656ba" (UID: "e1337d74-d239-4d6a-b748-0b02fc1656ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.410443 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1337d74-d239-4d6a-b748-0b02fc1656ba-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.410494 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1337d74-d239-4d6a-b748-0b02fc1656ba-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.410518 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzpwp\" (UniqueName: \"kubernetes.io/projected/e1337d74-d239-4d6a-b748-0b02fc1656ba-kube-api-access-hzpwp\") on node \"crc\" DevicePath \"\""
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.700479 4779 generic.go:334] "Generic (PLEG): container finished" podID="e1337d74-d239-4d6a-b748-0b02fc1656ba" containerID="43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb" exitCode=0
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.700593 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n59kn"
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.700592 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n59kn" event={"ID":"e1337d74-d239-4d6a-b748-0b02fc1656ba","Type":"ContainerDied","Data":"43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb"}
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.700680 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n59kn" event={"ID":"e1337d74-d239-4d6a-b748-0b02fc1656ba","Type":"ContainerDied","Data":"c11de759faa0e709e7aef7beb93c58c9c3ace679ae2f015cbd9162239c1150c2"}
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.700758 4779 scope.go:117] "RemoveContainer" containerID="43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb"
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.767514 4779 scope.go:117] "RemoveContainer" containerID="73fc61e9b90ba79689824eaeac35e6b577249c30a84976712d5265d697bfd40f"
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.775396 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n59kn"]
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.775712 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n59kn"]
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.815241 4779 scope.go:117] "RemoveContainer" containerID="a541615f14f9f273dd6c9b2d03b170e2b0e59a2b770c8841b1d217f4bce06104"
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.851131 4779 scope.go:117] "RemoveContainer" containerID="43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb"
Nov 28 13:24:05 crc kubenswrapper[4779]: E1128 13:24:05.851834 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb\": container with ID starting with 43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb not found: ID does not exist" containerID="43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb"
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.851886 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb"} err="failed to get container status \"43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb\": rpc error: code = NotFound desc = could not find container \"43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb\": container with ID starting with 43245611cbfb2de8820831b44718285208916de6396bf5515c069d33329b5aeb not found: ID does not exist"
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.851920 4779 scope.go:117] "RemoveContainer" containerID="73fc61e9b90ba79689824eaeac35e6b577249c30a84976712d5265d697bfd40f"
Nov 28 13:24:05 crc kubenswrapper[4779]: E1128 13:24:05.852425 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73fc61e9b90ba79689824eaeac35e6b577249c30a84976712d5265d697bfd40f\": container with ID starting with 73fc61e9b90ba79689824eaeac35e6b577249c30a84976712d5265d697bfd40f not found: ID does not exist" containerID="73fc61e9b90ba79689824eaeac35e6b577249c30a84976712d5265d697bfd40f"
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.852467 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73fc61e9b90ba79689824eaeac35e6b577249c30a84976712d5265d697bfd40f"} err="failed to get container status \"73fc61e9b90ba79689824eaeac35e6b577249c30a84976712d5265d697bfd40f\": rpc error: code = NotFound desc = could not find container \"73fc61e9b90ba79689824eaeac35e6b577249c30a84976712d5265d697bfd40f\": container with ID starting with 73fc61e9b90ba79689824eaeac35e6b577249c30a84976712d5265d697bfd40f not found: ID does not exist"
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.852490 4779 scope.go:117] "RemoveContainer" containerID="a541615f14f9f273dd6c9b2d03b170e2b0e59a2b770c8841b1d217f4bce06104"
Nov 28 13:24:05 crc kubenswrapper[4779]: E1128 13:24:05.852855 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a541615f14f9f273dd6c9b2d03b170e2b0e59a2b770c8841b1d217f4bce06104\": container with ID starting with a541615f14f9f273dd6c9b2d03b170e2b0e59a2b770c8841b1d217f4bce06104 not found: ID does not exist" containerID="a541615f14f9f273dd6c9b2d03b170e2b0e59a2b770c8841b1d217f4bce06104"
Nov 28 13:24:05 crc kubenswrapper[4779]: I1128 13:24:05.852886 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a541615f14f9f273dd6c9b2d03b170e2b0e59a2b770c8841b1d217f4bce06104"} err="failed to get container status \"a541615f14f9f273dd6c9b2d03b170e2b0e59a2b770c8841b1d217f4bce06104\": rpc error: code = NotFound desc = could not find container \"a541615f14f9f273dd6c9b2d03b170e2b0e59a2b770c8841b1d217f4bce06104\": container with ID starting with a541615f14f9f273dd6c9b2d03b170e2b0e59a2b770c8841b1d217f4bce06104 not found: ID does not exist"
Nov 28 13:24:07 crc kubenswrapper[4779]: I1128 13:24:07.746570 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1337d74-d239-4d6a-b748-0b02fc1656ba" path="/var/lib/kubelet/pods/e1337d74-d239-4d6a-b748-0b02fc1656ba/volumes"
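[Editor's note] The RemoveContainer / "ContainerStatus from runtime service failed ... NotFound" pairs above are benign: by the time the kubelet retries the delete, CRI-O has already removed the container, and the kubelet logs the error and moves on. A minimal sketch of that idempotent-cleanup pattern follows; the RuntimeClient interface and errNotFound value are hypothetical stand-ins, not the real CRI API.

    package main

    import (
    	"errors"
    	"fmt"
    )

    // errNotFound stands in for the gRPC NotFound error seen in the log
    // above; it is a placeholder, not the real CRI error type.
    var errNotFound = errors.New("container not found")

    // RuntimeClient is a hypothetical stand-in for the container runtime.
    type RuntimeClient interface {
    	RemoveContainer(id string) error
    }

    type goneRuntime struct{}

    func (goneRuntime) RemoveContainer(id string) error { return errNotFound }

    // cleanupContainer treats a missing container as successful cleanup,
    // mirroring how the kubelet logs the NotFound error and continues.
    func cleanupContainer(rt RuntimeClient, id string) error {
    	if err := rt.RemoveContainer(id); err != nil {
    		if errors.Is(err, errNotFound) {
    			fmt.Printf("container %s already removed, nothing to do\n", id)
    			return nil
    		}
    		return err
    	}
    	return nil
    }

    func main() {
    	_ = cleanupContainer(goneRuntime{}, "example-container-id")
    }
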
Nov 28 13:24:16 crc kubenswrapper[4779]: I1128 13:24:16.284863 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:24:16 crc kubenswrapper[4779]: I1128 13:24:16.285550 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:24:46 crc kubenswrapper[4779]: I1128 13:24:46.284891 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:24:46 crc kubenswrapper[4779]: I1128 13:24:46.285513 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:24:46 crc kubenswrapper[4779]: I1128 13:24:46.285570 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2"
Nov 28 13:24:46 crc kubenswrapper[4779]: I1128 13:24:46.286611 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 13:24:46 crc kubenswrapper[4779]: I1128 13:24:46.286715 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574" gracePeriod=600
Nov 28 13:24:46 crc kubenswrapper[4779]: E1128 13:24:46.412006 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:24:47 crc kubenswrapper[4779]: I1128 13:24:47.196963 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574" exitCode=0
Nov 28 13:24:47 crc kubenswrapper[4779]: I1128 13:24:47.197014 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"}
Nov 28 13:24:47 crc kubenswrapper[4779]: I1128 13:24:47.197051 4779 scope.go:117] "RemoveContainer" containerID="10993b5460bb0b6d68e62c004e58a6435fe581e4d0f8daaff2aaa3cfe0a29888"
Nov 28 13:24:47 crc kubenswrapper[4779]: I1128 13:24:47.197885 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:24:47 crc kubenswrapper[4779]: E1128 13:24:47.198454 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:24:59 crc kubenswrapper[4779]: I1128 13:24:59.742049 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:24:59 crc kubenswrapper[4779]: E1128 13:24:59.744042 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:25:12 crc kubenswrapper[4779]: I1128 13:25:12.727143 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:25:12 crc kubenswrapper[4779]: E1128 13:25:12.728734 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:25:24 crc kubenswrapper[4779]: I1128 13:25:24.727558 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:25:24 crc kubenswrapper[4779]: E1128 13:25:24.729374 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:25:38 crc kubenswrapper[4779]: I1128 13:25:38.726685 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:25:38 crc kubenswrapper[4779]: E1128 13:25:38.727452 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:25:49 crc kubenswrapper[4779]: I1128 13:25:49.733319 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
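[Editor's note] The repeating "back-off 5m0s restarting failed container" errors above and below are the kubelet's crash-loop backoff: each failed restart roughly doubles the wait before the next attempt, capped at the 5m0s quoted in the message itself. A minimal sketch of that schedule, assuming the commonly cited 10s initial delay and doubling factor (the exact constants are kubelet-internal; only the 5m cap is taken from this log):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed CrashLoopBackOff parameters: 10s initial delay,
    	// doubling per failed restart, capped at the 5m0s in the log.
    	delay := 10 * time.Second
    	const maxDelay = 5 * time.Minute
    	for restart := 1; restart <= 7; restart++ {
    		fmt.Printf("restart %d: wait %v\n", restart, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    	// After a few failures every retry waits the full 5m0s, which is
    	// why the same error keeps recurring at a slow, steady cadence.
    }
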
Nov 28 13:25:49 crc kubenswrapper[4779]: E1128 13:25:49.735164 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:26:01 crc kubenswrapper[4779]: I1128 13:26:01.727544 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:26:01 crc kubenswrapper[4779]: E1128 13:26:01.728610 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:26:14 crc kubenswrapper[4779]: I1128 13:26:14.956183 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:26:14 crc kubenswrapper[4779]: E1128 13:26:14.958851 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:26:19 crc kubenswrapper[4779]: I1128 13:26:19.216892 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7574d9569-x822f_f1d9753d-b49d-4e32-b312-137314283984/manager/0.log"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.063201 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.063891 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="4a8f5701-7e1d-414b-aa88-4af10f82a58e" containerName="openstackclient" containerID="cri-o://ffd7841121f7e8c3f4410f0c2ca301ba58474c3087214e5820b8a2f086337dd4" gracePeriod=2
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.078252 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.102355 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Nov 28 13:26:21 crc kubenswrapper[4779]: E1128 13:26:21.102766 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1337d74-d239-4d6a-b748-0b02fc1656ba" containerName="extract-utilities"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.102784 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1337d74-d239-4d6a-b748-0b02fc1656ba" containerName="extract-utilities"
Nov 28 13:26:21 crc kubenswrapper[4779]: E1128 13:26:21.102804 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a8f5701-7e1d-414b-aa88-4af10f82a58e" containerName="openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.102810 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8f5701-7e1d-414b-aa88-4af10f82a58e" containerName="openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: E1128 13:26:21.102828 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1337d74-d239-4d6a-b748-0b02fc1656ba" containerName="registry-server"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.102834 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1337d74-d239-4d6a-b748-0b02fc1656ba" containerName="registry-server"
Nov 28 13:26:21 crc kubenswrapper[4779]: E1128 13:26:21.102851 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1337d74-d239-4d6a-b748-0b02fc1656ba" containerName="extract-content"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.102859 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1337d74-d239-4d6a-b748-0b02fc1656ba" containerName="extract-content"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.103037 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1337d74-d239-4d6a-b748-0b02fc1656ba" containerName="registry-server"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.103062 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a8f5701-7e1d-414b-aa88-4af10f82a58e" containerName="openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.103777 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.123858 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.137265 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="4a8f5701-7e1d-414b-aa88-4af10f82a58e" podUID="7f349931-5145-4f53-a9a4-6e2c915d0ab9"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.285172 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7f349931-5145-4f53-a9a4-6e2c915d0ab9-openstack-config\") pod \"openstackclient\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") " pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.285509 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f349931-5145-4f53-a9a4-6e2c915d0ab9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") " pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.285605 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht62v\" (UniqueName: \"kubernetes.io/projected/7f349931-5145-4f53-a9a4-6e2c915d0ab9-kube-api-access-ht62v\") pod \"openstackclient\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") " pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.285710 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f349931-5145-4f53-a9a4-6e2c915d0ab9-openstack-config-secret\") pod \"openstackclient\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") " pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.388244 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f349931-5145-4f53-a9a4-6e2c915d0ab9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") " pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.388336 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht62v\" (UniqueName: \"kubernetes.io/projected/7f349931-5145-4f53-a9a4-6e2c915d0ab9-kube-api-access-ht62v\") pod \"openstackclient\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") " pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.388472 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f349931-5145-4f53-a9a4-6e2c915d0ab9-openstack-config-secret\") pod \"openstackclient\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") " pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.388665 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7f349931-5145-4f53-a9a4-6e2c915d0ab9-openstack-config\") pod \"openstackclient\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") " pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.390378 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7f349931-5145-4f53-a9a4-6e2c915d0ab9-openstack-config\") pod \"openstackclient\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") " pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.396610 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f349931-5145-4f53-a9a4-6e2c915d0ab9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") " pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.396746 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f349931-5145-4f53-a9a4-6e2c915d0ab9-openstack-config-secret\") pod \"openstackclient\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") " pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.406705 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht62v\" (UniqueName: \"kubernetes.io/projected/7f349931-5145-4f53-a9a4-6e2c915d0ab9-kube-api-access-ht62v\") pod \"openstackclient\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") " pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.440990 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 28 13:26:21 crc kubenswrapper[4779]: I1128 13:26:21.964493 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 28 13:26:21 crc kubenswrapper[4779]: W1128 13:26:21.969182 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f349931_5145_4f53_a9a4_6e2c915d0ab9.slice/crio-c5a98f9e0b207d96b881e0fa53482df802bc6505b1d3e69d531bf1a8eb0f702f WatchSource:0}: Error finding container c5a98f9e0b207d96b881e0fa53482df802bc6505b1d3e69d531bf1a8eb0f702f: Status 404 returned error can't find the container with id c5a98f9e0b207d96b881e0fa53482df802bc6505b1d3e69d531bf1a8eb0f702f
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.027189 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7f349931-5145-4f53-a9a4-6e2c915d0ab9","Type":"ContainerStarted","Data":"c5a98f9e0b207d96b881e0fa53482df802bc6505b1d3e69d531bf1a8eb0f702f"}
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.136768 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-qdwnv"]
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.138282 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-qdwnv"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.199441 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-qdwnv"]
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.307520 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81-operator-scripts\") pod \"aodh-db-create-qdwnv\" (UID: \"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81\") " pod="openstack/aodh-db-create-qdwnv"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.307618 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjtpj\" (UniqueName: \"kubernetes.io/projected/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81-kube-api-access-wjtpj\") pod \"aodh-db-create-qdwnv\" (UID: \"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81\") " pod="openstack/aodh-db-create-qdwnv"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.353273 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-1e1c-account-create-update-g8rjh"]
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.356629 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1e1c-account-create-update-g8rjh"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.361659 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.384965 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1e1c-account-create-update-g8rjh"]
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.409287 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81-operator-scripts\") pod \"aodh-db-create-qdwnv\" (UID: \"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81\") " pod="openstack/aodh-db-create-qdwnv"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.409404 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjtpj\" (UniqueName: \"kubernetes.io/projected/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81-kube-api-access-wjtpj\") pod \"aodh-db-create-qdwnv\" (UID: \"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81\") " pod="openstack/aodh-db-create-qdwnv"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.410411 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81-operator-scripts\") pod \"aodh-db-create-qdwnv\" (UID: \"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81\") " pod="openstack/aodh-db-create-qdwnv"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.428035 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjtpj\" (UniqueName: \"kubernetes.io/projected/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81-kube-api-access-wjtpj\") pod \"aodh-db-create-qdwnv\" (UID: \"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81\") " pod="openstack/aodh-db-create-qdwnv"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.510879 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2dm9\" (UniqueName: \"kubernetes.io/projected/77380600-4b57-4e7d-94a1-b3ec588f6989-kube-api-access-t2dm9\") pod \"aodh-1e1c-account-create-update-g8rjh\" (UID: \"77380600-4b57-4e7d-94a1-b3ec588f6989\") " pod="openstack/aodh-1e1c-account-create-update-g8rjh"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.511160 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77380600-4b57-4e7d-94a1-b3ec588f6989-operator-scripts\") pod \"aodh-1e1c-account-create-update-g8rjh\" (UID: \"77380600-4b57-4e7d-94a1-b3ec588f6989\") " pod="openstack/aodh-1e1c-account-create-update-g8rjh"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.519368 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-qdwnv"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.612961 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77380600-4b57-4e7d-94a1-b3ec588f6989-operator-scripts\") pod \"aodh-1e1c-account-create-update-g8rjh\" (UID: \"77380600-4b57-4e7d-94a1-b3ec588f6989\") " pod="openstack/aodh-1e1c-account-create-update-g8rjh"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.613446 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2dm9\" (UniqueName: \"kubernetes.io/projected/77380600-4b57-4e7d-94a1-b3ec588f6989-kube-api-access-t2dm9\") pod \"aodh-1e1c-account-create-update-g8rjh\" (UID: \"77380600-4b57-4e7d-94a1-b3ec588f6989\") " pod="openstack/aodh-1e1c-account-create-update-g8rjh"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.613977 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77380600-4b57-4e7d-94a1-b3ec588f6989-operator-scripts\") pod \"aodh-1e1c-account-create-update-g8rjh\" (UID: \"77380600-4b57-4e7d-94a1-b3ec588f6989\") " pod="openstack/aodh-1e1c-account-create-update-g8rjh"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.646745 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2dm9\" (UniqueName: \"kubernetes.io/projected/77380600-4b57-4e7d-94a1-b3ec588f6989-kube-api-access-t2dm9\") pod \"aodh-1e1c-account-create-update-g8rjh\" (UID: \"77380600-4b57-4e7d-94a1-b3ec588f6989\") " pod="openstack/aodh-1e1c-account-create-update-g8rjh"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.687919 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1e1c-account-create-update-g8rjh"
Nov 28 13:26:22 crc kubenswrapper[4779]: I1128 13:26:22.978959 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-qdwnv"]
Nov 28 13:26:22 crc kubenswrapper[4779]: W1128 13:26:22.979558 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6d3798e_7ea3_4c5d_8d4f_26f180cfdc81.slice/crio-bd2b68e31802e71b698554be28f0ba6643cda9a4aa96938a094a23f894eceb09 WatchSource:0}: Error finding container bd2b68e31802e71b698554be28f0ba6643cda9a4aa96938a094a23f894eceb09: Status 404 returned error can't find the container with id bd2b68e31802e71b698554be28f0ba6643cda9a4aa96938a094a23f894eceb09
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.042465 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7f349931-5145-4f53-a9a4-6e2c915d0ab9","Type":"ContainerStarted","Data":"62ac887c34e98db417a37aca60c8ff5dd801b1a13cf3e720d629ae03313a1016"}
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.045905 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-qdwnv" event={"ID":"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81","Type":"ContainerStarted","Data":"bd2b68e31802e71b698554be28f0ba6643cda9a4aa96938a094a23f894eceb09"}
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.067239 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.067218551 podStartE2EDuration="2.067218551s" podCreationTimestamp="2025-11-28 13:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 13:26:23.057216405 +0000 UTC m=+3043.622891769" watchObservedRunningTime="2025-11-28 13:26:23.067218551 +0000 UTC m=+3043.632893915"
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.142279 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1e1c-account-create-update-g8rjh"]
Nov 28 13:26:23 crc kubenswrapper[4779]: W1128 13:26:23.148446 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77380600_4b57_4e7d_94a1_b3ec588f6989.slice/crio-2d5e65be67f95a167940b6759b6c2387aa1d854fbf0d11abd75622c18415270b WatchSource:0}: Error finding container 2d5e65be67f95a167940b6759b6c2387aa1d854fbf0d11abd75622c18415270b: Status 404 returned error can't find the container with id 2d5e65be67f95a167940b6759b6c2387aa1d854fbf0d11abd75622c18415270b
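[Editor's note] The pod_startup_latency_tracker entry above reports podStartSLOduration=2.067218551: watchObservedRunningTime (13:26:23.067218551) minus podCreationTimestamp (13:26:21, which the API stores at whole-second granularity). The arithmetic, reproduced:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values taken from the "Observed pod startup duration" entry above.
    	created := time.Date(2025, 11, 28, 13, 26, 21, 0, time.UTC)
    	running := time.Date(2025, 11, 28, 13, 26, 23, 67218551, time.UTC)
    	fmt.Println(running.Sub(created)) // prints 2.067218551s
    }
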
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.281893 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.428468 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8f5701-7e1d-414b-aa88-4af10f82a58e-combined-ca-bundle\") pod \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") "
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.428613 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a8f5701-7e1d-414b-aa88-4af10f82a58e-openstack-config-secret\") pod \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") "
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.428652 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a8f5701-7e1d-414b-aa88-4af10f82a58e-openstack-config\") pod \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") "
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.428705 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d624j\" (UniqueName: \"kubernetes.io/projected/4a8f5701-7e1d-414b-aa88-4af10f82a58e-kube-api-access-d624j\") pod \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\" (UID: \"4a8f5701-7e1d-414b-aa88-4af10f82a58e\") "
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.436509 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8f5701-7e1d-414b-aa88-4af10f82a58e-kube-api-access-d624j" (OuterVolumeSpecName: "kube-api-access-d624j") pod "4a8f5701-7e1d-414b-aa88-4af10f82a58e" (UID: "4a8f5701-7e1d-414b-aa88-4af10f82a58e"). InnerVolumeSpecName "kube-api-access-d624j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.465213 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8f5701-7e1d-414b-aa88-4af10f82a58e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4a8f5701-7e1d-414b-aa88-4af10f82a58e" (UID: "4a8f5701-7e1d-414b-aa88-4af10f82a58e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.466791 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a8f5701-7e1d-414b-aa88-4af10f82a58e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a8f5701-7e1d-414b-aa88-4af10f82a58e" (UID: "4a8f5701-7e1d-414b-aa88-4af10f82a58e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.499113 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a8f5701-7e1d-414b-aa88-4af10f82a58e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4a8f5701-7e1d-414b-aa88-4af10f82a58e" (UID: "4a8f5701-7e1d-414b-aa88-4af10f82a58e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.531999 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8f5701-7e1d-414b-aa88-4af10f82a58e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.532041 4779 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a8f5701-7e1d-414b-aa88-4af10f82a58e-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.532054 4779 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a8f5701-7e1d-414b-aa88-4af10f82a58e-openstack-config\") on node \"crc\" DevicePath \"\""
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.532067 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d624j\" (UniqueName: \"kubernetes.io/projected/4a8f5701-7e1d-414b-aa88-4af10f82a58e-kube-api-access-d624j\") on node \"crc\" DevicePath \"\""
Nov 28 13:26:23 crc kubenswrapper[4779]: I1128 13:26:23.737287 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a8f5701-7e1d-414b-aa88-4af10f82a58e" path="/var/lib/kubelet/pods/4a8f5701-7e1d-414b-aa88-4af10f82a58e/volumes"
Nov 28 13:26:24 crc kubenswrapper[4779]: I1128 13:26:24.056324 4779 generic.go:334] "Generic (PLEG): container finished" podID="4a8f5701-7e1d-414b-aa88-4af10f82a58e" containerID="ffd7841121f7e8c3f4410f0c2ca301ba58474c3087214e5820b8a2f086337dd4" exitCode=137
Nov 28 13:26:24 crc kubenswrapper[4779]: I1128 13:26:24.056375 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 28 13:26:24 crc kubenswrapper[4779]: I1128 13:26:24.056431 4779 scope.go:117] "RemoveContainer" containerID="ffd7841121f7e8c3f4410f0c2ca301ba58474c3087214e5820b8a2f086337dd4"
Nov 28 13:26:24 crc kubenswrapper[4779]: I1128 13:26:24.061663 4779 generic.go:334] "Generic (PLEG): container finished" podID="e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81" containerID="e808832d088605400992406c8f85f8cba6a1b7abe639f6668f8560035c59d1b7" exitCode=0
Nov 28 13:26:24 crc kubenswrapper[4779]: I1128 13:26:24.061841 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-qdwnv" event={"ID":"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81","Type":"ContainerDied","Data":"e808832d088605400992406c8f85f8cba6a1b7abe639f6668f8560035c59d1b7"}
Nov 28 13:26:24 crc kubenswrapper[4779]: I1128 13:26:24.064196 4779 generic.go:334] "Generic (PLEG): container finished" podID="77380600-4b57-4e7d-94a1-b3ec588f6989" containerID="8d706fa6c23626859c5045eaaaf12eea75ce69da87e1f3b013146f958eacfeb1" exitCode=0
Nov 28 13:26:24 crc kubenswrapper[4779]: I1128 13:26:24.064270 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1e1c-account-create-update-g8rjh" event={"ID":"77380600-4b57-4e7d-94a1-b3ec588f6989","Type":"ContainerDied","Data":"8d706fa6c23626859c5045eaaaf12eea75ce69da87e1f3b013146f958eacfeb1"}
Nov 28 13:26:24 crc kubenswrapper[4779]: I1128 13:26:24.064292 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1e1c-account-create-update-g8rjh" event={"ID":"77380600-4b57-4e7d-94a1-b3ec588f6989","Type":"ContainerStarted","Data":"2d5e65be67f95a167940b6759b6c2387aa1d854fbf0d11abd75622c18415270b"}
Nov 28 13:26:24 crc kubenswrapper[4779]: I1128 13:26:24.090327 4779 scope.go:117] "RemoveContainer" containerID="ffd7841121f7e8c3f4410f0c2ca301ba58474c3087214e5820b8a2f086337dd4"
Nov 28 13:26:24 crc kubenswrapper[4779]: E1128 13:26:24.090863 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd7841121f7e8c3f4410f0c2ca301ba58474c3087214e5820b8a2f086337dd4\": container with ID starting with ffd7841121f7e8c3f4410f0c2ca301ba58474c3087214e5820b8a2f086337dd4 not found: ID does not exist" containerID="ffd7841121f7e8c3f4410f0c2ca301ba58474c3087214e5820b8a2f086337dd4"
Nov 28 13:26:24 crc kubenswrapper[4779]: I1128 13:26:24.090927 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd7841121f7e8c3f4410f0c2ca301ba58474c3087214e5820b8a2f086337dd4"} err="failed to get container status \"ffd7841121f7e8c3f4410f0c2ca301ba58474c3087214e5820b8a2f086337dd4\": rpc error: code = NotFound desc = could not find container \"ffd7841121f7e8c3f4410f0c2ca301ba58474c3087214e5820b8a2f086337dd4\": container with ID starting with ffd7841121f7e8c3f4410f0c2ca301ba58474c3087214e5820b8a2f086337dd4 not found: ID does not exist"
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.544896 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1e1c-account-create-update-g8rjh"
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.552678 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-qdwnv"
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.699744 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81-operator-scripts\") pod \"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81\" (UID: \"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81\") "
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.699971 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2dm9\" (UniqueName: \"kubernetes.io/projected/77380600-4b57-4e7d-94a1-b3ec588f6989-kube-api-access-t2dm9\") pod \"77380600-4b57-4e7d-94a1-b3ec588f6989\" (UID: \"77380600-4b57-4e7d-94a1-b3ec588f6989\") "
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.700023 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77380600-4b57-4e7d-94a1-b3ec588f6989-operator-scripts\") pod \"77380600-4b57-4e7d-94a1-b3ec588f6989\" (UID: \"77380600-4b57-4e7d-94a1-b3ec588f6989\") "
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.700080 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjtpj\" (UniqueName: \"kubernetes.io/projected/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81-kube-api-access-wjtpj\") pod \"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81\" (UID: \"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81\") "
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.701696 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81" (UID: "e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.702134 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77380600-4b57-4e7d-94a1-b3ec588f6989-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77380600-4b57-4e7d-94a1-b3ec588f6989" (UID: "77380600-4b57-4e7d-94a1-b3ec588f6989"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.708863 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81-kube-api-access-wjtpj" (OuterVolumeSpecName: "kube-api-access-wjtpj") pod "e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81" (UID: "e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81"). InnerVolumeSpecName "kube-api-access-wjtpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.713522 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77380600-4b57-4e7d-94a1-b3ec588f6989-kube-api-access-t2dm9" (OuterVolumeSpecName: "kube-api-access-t2dm9") pod "77380600-4b57-4e7d-94a1-b3ec588f6989" (UID: "77380600-4b57-4e7d-94a1-b3ec588f6989"). InnerVolumeSpecName "kube-api-access-t2dm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.803273 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77380600-4b57-4e7d-94a1-b3ec588f6989-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.803324 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjtpj\" (UniqueName: \"kubernetes.io/projected/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81-kube-api-access-wjtpj\") on node \"crc\" DevicePath \"\""
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.803346 4779 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 13:26:25 crc kubenswrapper[4779]: I1128 13:26:25.803366 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2dm9\" (UniqueName: \"kubernetes.io/projected/77380600-4b57-4e7d-94a1-b3ec588f6989-kube-api-access-t2dm9\") on node \"crc\" DevicePath \"\""
Nov 28 13:26:26 crc kubenswrapper[4779]: I1128 13:26:26.103588 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-qdwnv"
Nov 28 13:26:26 crc kubenswrapper[4779]: I1128 13:26:26.103798 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-qdwnv" event={"ID":"e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81","Type":"ContainerDied","Data":"bd2b68e31802e71b698554be28f0ba6643cda9a4aa96938a094a23f894eceb09"}
Nov 28 13:26:26 crc kubenswrapper[4779]: I1128 13:26:26.103989 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd2b68e31802e71b698554be28f0ba6643cda9a4aa96938a094a23f894eceb09"
Nov 28 13:26:26 crc kubenswrapper[4779]: I1128 13:26:26.106656 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1e1c-account-create-update-g8rjh" event={"ID":"77380600-4b57-4e7d-94a1-b3ec588f6989","Type":"ContainerDied","Data":"2d5e65be67f95a167940b6759b6c2387aa1d854fbf0d11abd75622c18415270b"}
Nov 28 13:26:26 crc kubenswrapper[4779]: I1128 13:26:26.106733 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d5e65be67f95a167940b6759b6c2387aa1d854fbf0d11abd75622c18415270b"
Nov 28 13:26:26 crc kubenswrapper[4779]: I1128 13:26:26.106847 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1e1c-account-create-update-g8rjh"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.658924 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-82qkk"]
Nov 28 13:26:27 crc kubenswrapper[4779]: E1128 13:26:27.659426 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81" containerName="mariadb-database-create"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.659446 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81" containerName="mariadb-database-create"
Nov 28 13:26:27 crc kubenswrapper[4779]: E1128 13:26:27.659464 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77380600-4b57-4e7d-94a1-b3ec588f6989" containerName="mariadb-account-create-update"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.659472 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="77380600-4b57-4e7d-94a1-b3ec588f6989" containerName="mariadb-account-create-update"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.659755 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81" containerName="mariadb-database-create"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.659771 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="77380600-4b57-4e7d-94a1-b3ec588f6989" containerName="mariadb-account-create-update"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.660566 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.662616 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.666579 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.670475 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wdvkm"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.670725 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.673510 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-82qkk"]
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.789681 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cgmb\" (UniqueName: \"kubernetes.io/projected/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-kube-api-access-5cgmb\") pod \"aodh-db-sync-82qkk\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") " pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.789796 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-scripts\") pod \"aodh-db-sync-82qkk\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") " pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.789832 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-combined-ca-bundle\") pod \"aodh-db-sync-82qkk\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") " pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.790377 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-config-data\") pod \"aodh-db-sync-82qkk\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") " pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.892826 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-config-data\") pod \"aodh-db-sync-82qkk\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") " pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.892945 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cgmb\" (UniqueName: \"kubernetes.io/projected/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-kube-api-access-5cgmb\") pod \"aodh-db-sync-82qkk\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") " pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.893127 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-scripts\") pod \"aodh-db-sync-82qkk\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") " pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.893164 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-combined-ca-bundle\") pod \"aodh-db-sync-82qkk\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") " pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.897523 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-combined-ca-bundle\") pod \"aodh-db-sync-82qkk\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") " pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.897717 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-config-data\") pod \"aodh-db-sync-82qkk\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") " pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.897653 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-scripts\") pod \"aodh-db-sync-82qkk\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") " pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.908585 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cgmb\" (UniqueName: \"kubernetes.io/projected/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-kube-api-access-5cgmb\") pod \"aodh-db-sync-82qkk\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") " pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:27 crc kubenswrapper[4779]: I1128 13:26:27.989051 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:28 crc kubenswrapper[4779]: I1128 13:26:28.536389 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-82qkk"]
Nov 28 13:26:29 crc kubenswrapper[4779]: I1128 13:26:29.136143 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-82qkk" event={"ID":"1809c14b-75a0-4a41-b67f-e0a62aa53f0d","Type":"ContainerStarted","Data":"2e9b26f299b9b028f3a2f605cdcd9d13dccee3ac10e3070b5bd8cca159aad08e"}
Nov 28 13:26:29 crc kubenswrapper[4779]: I1128 13:26:29.752904 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:26:29 crc kubenswrapper[4779]: E1128 13:26:29.753555 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:26:33 crc kubenswrapper[4779]: I1128 13:26:33.173286 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-82qkk" event={"ID":"1809c14b-75a0-4a41-b67f-e0a62aa53f0d","Type":"ContainerStarted","Data":"d2e71ec8d0c2de5825b847e5cad41abbc75bcf6d7be2354aa6e25512d15b1321"}
Nov 28 13:26:35 crc kubenswrapper[4779]: I1128 13:26:35.199735 4779 generic.go:334] "Generic (PLEG): container finished" podID="1809c14b-75a0-4a41-b67f-e0a62aa53f0d" containerID="d2e71ec8d0c2de5825b847e5cad41abbc75bcf6d7be2354aa6e25512d15b1321" exitCode=0
Nov 28 13:26:35 crc kubenswrapper[4779]: I1128 13:26:35.199799 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-82qkk" event={"ID":"1809c14b-75a0-4a41-b67f-e0a62aa53f0d","Type":"ContainerDied","Data":"d2e71ec8d0c2de5825b847e5cad41abbc75bcf6d7be2354aa6e25512d15b1321"}
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.516057 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.664076 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-combined-ca-bundle\") pod \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") "
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.664152 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cgmb\" (UniqueName: \"kubernetes.io/projected/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-kube-api-access-5cgmb\") pod \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") "
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.664245 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-config-data\") pod \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") "
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.664293 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-scripts\") pod \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\" (UID: \"1809c14b-75a0-4a41-b67f-e0a62aa53f0d\") "
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.691364 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-scripts" (OuterVolumeSpecName: "scripts") pod "1809c14b-75a0-4a41-b67f-e0a62aa53f0d" (UID: "1809c14b-75a0-4a41-b67f-e0a62aa53f0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.691443 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-kube-api-access-5cgmb" (OuterVolumeSpecName: "kube-api-access-5cgmb") pod "1809c14b-75a0-4a41-b67f-e0a62aa53f0d" (UID: "1809c14b-75a0-4a41-b67f-e0a62aa53f0d"). InnerVolumeSpecName "kube-api-access-5cgmb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.719170 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-config-data" (OuterVolumeSpecName: "config-data") pod "1809c14b-75a0-4a41-b67f-e0a62aa53f0d" (UID: "1809c14b-75a0-4a41-b67f-e0a62aa53f0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.740712 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1809c14b-75a0-4a41-b67f-e0a62aa53f0d" (UID: "1809c14b-75a0-4a41-b67f-e0a62aa53f0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.766022 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.766066 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.766082 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cgmb\" (UniqueName: \"kubernetes.io/projected/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-kube-api-access-5cgmb\") on node \"crc\" DevicePath \"\""
Nov 28 13:26:36 crc kubenswrapper[4779]: I1128 13:26:36.766109 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1809c14b-75a0-4a41-b67f-e0a62aa53f0d-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 13:26:37 crc kubenswrapper[4779]: I1128 13:26:37.220718 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-82qkk" event={"ID":"1809c14b-75a0-4a41-b67f-e0a62aa53f0d","Type":"ContainerDied","Data":"2e9b26f299b9b028f3a2f605cdcd9d13dccee3ac10e3070b5bd8cca159aad08e"}
Nov 28 13:26:37 crc kubenswrapper[4779]: I1128 13:26:37.220764 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e9b26f299b9b028f3a2f605cdcd9d13dccee3ac10e3070b5bd8cca159aad08e"
Nov 28 13:26:37 crc kubenswrapper[4779]: I1128 13:26:37.220791 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-82qkk"
Nov 28 13:26:40 crc kubenswrapper[4779]: I1128 13:26:40.727006 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:26:40 crc kubenswrapper[4779]: E1128 13:26:40.728157 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.145722 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Nov 28 13:26:42 crc kubenswrapper[4779]: E1128 13:26:42.146453 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1809c14b-75a0-4a41-b67f-e0a62aa53f0d" containerName="aodh-db-sync"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.146469 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1809c14b-75a0-4a41-b67f-e0a62aa53f0d" containerName="aodh-db-sync"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.146686 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1809c14b-75a0-4a41-b67f-e0a62aa53f0d" containerName="aodh-db-sync"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.148755 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.150363 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wdvkm"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.151241 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.153671 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.162237 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.299989 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-scripts\") pod \"aodh-0\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " pod="openstack/aodh-0"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.300172 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-config-data\") pod \"aodh-0\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " pod="openstack/aodh-0"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.300258 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6pgv\" (UniqueName: \"kubernetes.io/projected/431e808a-cf9a-455e-9178-2a3b30a3a78c-kube-api-access-c6pgv\") pod \"aodh-0\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " pod="openstack/aodh-0"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.300434 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " pod="openstack/aodh-0"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.402579 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " pod="openstack/aodh-0"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.402684 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-scripts\") pod \"aodh-0\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " pod="openstack/aodh-0"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.402726 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-config-data\") pod \"aodh-0\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " pod="openstack/aodh-0"
Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.402786 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6pgv\" (UniqueName: \"kubernetes.io/projected/431e808a-cf9a-455e-9178-2a3b30a3a78c-kube-api-access-c6pgv\") pod \"aodh-0\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " pod="openstack/aodh-0"
Nov 28 13:26:42 crc kubenswrapper[4779]: 
I1128 13:26:42.408499 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-scripts\") pod \"aodh-0\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " pod="openstack/aodh-0" Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.410788 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " pod="openstack/aodh-0" Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.411360 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-config-data\") pod \"aodh-0\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " pod="openstack/aodh-0" Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.433784 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6pgv\" (UniqueName: \"kubernetes.io/projected/431e808a-cf9a-455e-9178-2a3b30a3a78c-kube-api-access-c6pgv\") pod \"aodh-0\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " pod="openstack/aodh-0" Nov 28 13:26:42 crc kubenswrapper[4779]: I1128 13:26:42.509923 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 13:26:43 crc kubenswrapper[4779]: I1128 13:26:43.003457 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 13:26:43 crc kubenswrapper[4779]: W1128 13:26:43.016183 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod431e808a_cf9a_455e_9178_2a3b30a3a78c.slice/crio-b79439225d0e4eaa171ba61ab222b6f6be2675b2acf1206af8f0fbc1cdc4b1dc WatchSource:0}: Error finding container b79439225d0e4eaa171ba61ab222b6f6be2675b2acf1206af8f0fbc1cdc4b1dc: Status 404 returned error can't find the container with id b79439225d0e4eaa171ba61ab222b6f6be2675b2acf1206af8f0fbc1cdc4b1dc Nov 28 13:26:43 crc kubenswrapper[4779]: I1128 13:26:43.284591 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"431e808a-cf9a-455e-9178-2a3b30a3a78c","Type":"ContainerStarted","Data":"b79439225d0e4eaa171ba61ab222b6f6be2675b2acf1206af8f0fbc1cdc4b1dc"} Nov 28 13:26:44 crc kubenswrapper[4779]: I1128 13:26:44.243542 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 13:26:44 crc kubenswrapper[4779]: I1128 13:26:44.244070 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="ceilometer-central-agent" containerID="cri-o://6880011e54c00c5c9ef65fb20f9aa16c481813ccf5c5cbd95ad5d6bde775ca5a" gracePeriod=30 Nov 28 13:26:44 crc kubenswrapper[4779]: I1128 13:26:44.244201 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="sg-core" containerID="cri-o://2f0a352d15db1158d23e457bbf089866271c33e67c97172d12947b073d6afeb9" gracePeriod=30 Nov 28 13:26:44 crc kubenswrapper[4779]: I1128 13:26:44.244239 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="ceilometer-notification-agent" 
containerID="cri-o://89181b30d62c3f4430f76d72a302618d3ba7217c8aea43dbcee2f552045c6f8c" gracePeriod=30 Nov 28 13:26:44 crc kubenswrapper[4779]: I1128 13:26:44.244417 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="proxy-httpd" containerID="cri-o://9d0b722e3ef9581850dec1db5500c91443fe91e794b6ddf8ad364c3dd1a3b06d" gracePeriod=30 Nov 28 13:26:44 crc kubenswrapper[4779]: I1128 13:26:44.295365 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"431e808a-cf9a-455e-9178-2a3b30a3a78c","Type":"ContainerStarted","Data":"9252e640db47928ace9578de3cc3f4eae0abb49973b675796d9788adfce1c65e"} Nov 28 13:26:45 crc kubenswrapper[4779]: I1128 13:26:45.306774 4779 generic.go:334] "Generic (PLEG): container finished" podID="6a49e784-eabb-4391-a69f-695474f302b7" containerID="9d0b722e3ef9581850dec1db5500c91443fe91e794b6ddf8ad364c3dd1a3b06d" exitCode=0 Nov 28 13:26:45 crc kubenswrapper[4779]: I1128 13:26:45.307318 4779 generic.go:334] "Generic (PLEG): container finished" podID="6a49e784-eabb-4391-a69f-695474f302b7" containerID="2f0a352d15db1158d23e457bbf089866271c33e67c97172d12947b073d6afeb9" exitCode=2 Nov 28 13:26:45 crc kubenswrapper[4779]: I1128 13:26:45.307327 4779 generic.go:334] "Generic (PLEG): container finished" podID="6a49e784-eabb-4391-a69f-695474f302b7" containerID="6880011e54c00c5c9ef65fb20f9aa16c481813ccf5c5cbd95ad5d6bde775ca5a" exitCode=0 Nov 28 13:26:45 crc kubenswrapper[4779]: I1128 13:26:45.306858 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a49e784-eabb-4391-a69f-695474f302b7","Type":"ContainerDied","Data":"9d0b722e3ef9581850dec1db5500c91443fe91e794b6ddf8ad364c3dd1a3b06d"} Nov 28 13:26:45 crc kubenswrapper[4779]: I1128 13:26:45.307384 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a49e784-eabb-4391-a69f-695474f302b7","Type":"ContainerDied","Data":"2f0a352d15db1158d23e457bbf089866271c33e67c97172d12947b073d6afeb9"} Nov 28 13:26:45 crc kubenswrapper[4779]: I1128 13:26:45.307399 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a49e784-eabb-4391-a69f-695474f302b7","Type":"ContainerDied","Data":"6880011e54c00c5c9ef65fb20f9aa16c481813ccf5c5cbd95ad5d6bde775ca5a"} Nov 28 13:26:45 crc kubenswrapper[4779]: I1128 13:26:45.309341 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"431e808a-cf9a-455e-9178-2a3b30a3a78c","Type":"ContainerStarted","Data":"e9ab39a5cef54632ccca5fa67b7c0c1bb26320a90835c305c519fb2270da368a"} Nov 28 13:26:45 crc kubenswrapper[4779]: I1128 13:26:45.503747 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 13:26:47 crc kubenswrapper[4779]: I1128 13:26:47.341548 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"431e808a-cf9a-455e-9178-2a3b30a3a78c","Type":"ContainerStarted","Data":"0c8ada8b0b52e8e5b2906ff0bf34cdf25c2ed87c47cdb9d09ca3dace64069cef"} Nov 28 13:26:49 crc kubenswrapper[4779]: I1128 13:26:49.364887 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"431e808a-cf9a-455e-9178-2a3b30a3a78c","Type":"ContainerStarted","Data":"d750849ebd533d589b7bf96573a18f57686cd01cb2b579b15a5bbb39c0fbe8c4"} Nov 28 13:26:49 crc kubenswrapper[4779]: I1128 13:26:49.365029 4779 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/aodh-0" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-api" containerID="cri-o://9252e640db47928ace9578de3cc3f4eae0abb49973b675796d9788adfce1c65e" gracePeriod=30 Nov 28 13:26:49 crc kubenswrapper[4779]: I1128 13:26:49.365148 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-evaluator" containerID="cri-o://e9ab39a5cef54632ccca5fa67b7c0c1bb26320a90835c305c519fb2270da368a" gracePeriod=30 Nov 28 13:26:49 crc kubenswrapper[4779]: I1128 13:26:49.365071 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-notifier" containerID="cri-o://0c8ada8b0b52e8e5b2906ff0bf34cdf25c2ed87c47cdb9d09ca3dace64069cef" gracePeriod=30 Nov 28 13:26:49 crc kubenswrapper[4779]: I1128 13:26:49.367498 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-listener" containerID="cri-o://d750849ebd533d589b7bf96573a18f57686cd01cb2b579b15a5bbb39c0fbe8c4" gracePeriod=30 Nov 28 13:26:49 crc kubenswrapper[4779]: I1128 13:26:49.398860 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.963058507 podStartE2EDuration="7.398843396s" podCreationTimestamp="2025-11-28 13:26:42 +0000 UTC" firstStartedPulling="2025-11-28 13:26:43.018645866 +0000 UTC m=+3063.584321220" lastFinishedPulling="2025-11-28 13:26:48.454430745 +0000 UTC m=+3069.020106109" observedRunningTime="2025-11-28 13:26:49.388599443 +0000 UTC m=+3069.954274827" watchObservedRunningTime="2025-11-28 13:26:49.398843396 +0000 UTC m=+3069.964518740" Nov 28 13:26:50 crc kubenswrapper[4779]: I1128 13:26:50.379977 4779 generic.go:334] "Generic (PLEG): container finished" podID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerID="0c8ada8b0b52e8e5b2906ff0bf34cdf25c2ed87c47cdb9d09ca3dace64069cef" exitCode=0 Nov 28 13:26:50 crc kubenswrapper[4779]: I1128 13:26:50.380720 4779 generic.go:334] "Generic (PLEG): container finished" podID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerID="e9ab39a5cef54632ccca5fa67b7c0c1bb26320a90835c305c519fb2270da368a" exitCode=0 Nov 28 13:26:50 crc kubenswrapper[4779]: I1128 13:26:50.380736 4779 generic.go:334] "Generic (PLEG): container finished" podID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerID="9252e640db47928ace9578de3cc3f4eae0abb49973b675796d9788adfce1c65e" exitCode=0 Nov 28 13:26:50 crc kubenswrapper[4779]: I1128 13:26:50.380049 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"431e808a-cf9a-455e-9178-2a3b30a3a78c","Type":"ContainerDied","Data":"0c8ada8b0b52e8e5b2906ff0bf34cdf25c2ed87c47cdb9d09ca3dace64069cef"} Nov 28 13:26:50 crc kubenswrapper[4779]: I1128 13:26:50.380794 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"431e808a-cf9a-455e-9178-2a3b30a3a78c","Type":"ContainerDied","Data":"e9ab39a5cef54632ccca5fa67b7c0c1bb26320a90835c305c519fb2270da368a"} Nov 28 13:26:50 crc kubenswrapper[4779]: I1128 13:26:50.380822 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"431e808a-cf9a-455e-9178-2a3b30a3a78c","Type":"ContainerDied","Data":"9252e640db47928ace9578de3cc3f4eae0abb49973b675796d9788adfce1c65e"} Nov 28 13:26:53 crc kubenswrapper[4779]: I1128 13:26:53.727415 4779 scope.go:117] 
"RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574" Nov 28 13:26:53 crc kubenswrapper[4779]: E1128 13:26:53.728478 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.455604 4779 generic.go:334] "Generic (PLEG): container finished" podID="6a49e784-eabb-4391-a69f-695474f302b7" containerID="89181b30d62c3f4430f76d72a302618d3ba7217c8aea43dbcee2f552045c6f8c" exitCode=0 Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.456257 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a49e784-eabb-4391-a69f-695474f302b7","Type":"ContainerDied","Data":"89181b30d62c3f4430f76d72a302618d3ba7217c8aea43dbcee2f552045c6f8c"} Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.544171 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.672319 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49e784-eabb-4391-a69f-695474f302b7-log-httpd\") pod \"6a49e784-eabb-4391-a69f-695474f302b7\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.672460 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-config-data\") pod \"6a49e784-eabb-4391-a69f-695474f302b7\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.672510 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-scripts\") pod \"6a49e784-eabb-4391-a69f-695474f302b7\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.672543 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-ceilometer-tls-certs\") pod \"6a49e784-eabb-4391-a69f-695474f302b7\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.672620 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-sg-core-conf-yaml\") pod \"6a49e784-eabb-4391-a69f-695474f302b7\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.672659 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-combined-ca-bundle\") pod \"6a49e784-eabb-4391-a69f-695474f302b7\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.672692 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-8hn6p\" (UniqueName: \"kubernetes.io/projected/6a49e784-eabb-4391-a69f-695474f302b7-kube-api-access-8hn6p\") pod \"6a49e784-eabb-4391-a69f-695474f302b7\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.672786 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49e784-eabb-4391-a69f-695474f302b7-run-httpd\") pod \"6a49e784-eabb-4391-a69f-695474f302b7\" (UID: \"6a49e784-eabb-4391-a69f-695474f302b7\") " Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.673333 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a49e784-eabb-4391-a69f-695474f302b7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6a49e784-eabb-4391-a69f-695474f302b7" (UID: "6a49e784-eabb-4391-a69f-695474f302b7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.672802 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a49e784-eabb-4391-a69f-695474f302b7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6a49e784-eabb-4391-a69f-695474f302b7" (UID: "6a49e784-eabb-4391-a69f-695474f302b7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.682450 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a49e784-eabb-4391-a69f-695474f302b7-kube-api-access-8hn6p" (OuterVolumeSpecName: "kube-api-access-8hn6p") pod "6a49e784-eabb-4391-a69f-695474f302b7" (UID: "6a49e784-eabb-4391-a69f-695474f302b7"). InnerVolumeSpecName "kube-api-access-8hn6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.682578 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-scripts" (OuterVolumeSpecName: "scripts") pod "6a49e784-eabb-4391-a69f-695474f302b7" (UID: "6a49e784-eabb-4391-a69f-695474f302b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.704731 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6a49e784-eabb-4391-a69f-695474f302b7" (UID: "6a49e784-eabb-4391-a69f-695474f302b7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.732469 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6a49e784-eabb-4391-a69f-695474f302b7" (UID: "6a49e784-eabb-4391-a69f-695474f302b7"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.759005 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a49e784-eabb-4391-a69f-695474f302b7" (UID: "6a49e784-eabb-4391-a69f-695474f302b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.775205 4779 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49e784-eabb-4391-a69f-695474f302b7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.775232 4779 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a49e784-eabb-4391-a69f-695474f302b7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.775244 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.775256 4779 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.775268 4779 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.775279 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.775290 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hn6p\" (UniqueName: \"kubernetes.io/projected/6a49e784-eabb-4391-a69f-695474f302b7-kube-api-access-8hn6p\") on node \"crc\" DevicePath \"\"" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.785810 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-config-data" (OuterVolumeSpecName: "config-data") pod "6a49e784-eabb-4391-a69f-695474f302b7" (UID: "6a49e784-eabb-4391-a69f-695474f302b7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:26:55 crc kubenswrapper[4779]: I1128 13:26:55.884602 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a49e784-eabb-4391-a69f-695474f302b7-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.473592 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a49e784-eabb-4391-a69f-695474f302b7","Type":"ContainerDied","Data":"f4a0125785b585015858eca2b4e8dda26572e2f7dde89693fa05a133fd0748b0"} Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.473997 4779 scope.go:117] "RemoveContainer" containerID="9d0b722e3ef9581850dec1db5500c91443fe91e794b6ddf8ad364c3dd1a3b06d" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.473768 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.516200 4779 scope.go:117] "RemoveContainer" containerID="2f0a352d15db1158d23e457bbf089866271c33e67c97172d12947b073d6afeb9" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.557229 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.564216 4779 scope.go:117] "RemoveContainer" containerID="89181b30d62c3f4430f76d72a302618d3ba7217c8aea43dbcee2f552045c6f8c" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.576145 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.592350 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.597384 4779 scope.go:117] "RemoveContainer" containerID="6880011e54c00c5c9ef65fb20f9aa16c481813ccf5c5cbd95ad5d6bde775ca5a" Nov 28 13:26:56 crc kubenswrapper[4779]: E1128 13:26:56.599023 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="ceilometer-notification-agent" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.599052 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="ceilometer-notification-agent" Nov 28 13:26:56 crc kubenswrapper[4779]: E1128 13:26:56.599076 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="ceilometer-central-agent" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.599084 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="ceilometer-central-agent" Nov 28 13:26:56 crc kubenswrapper[4779]: E1128 13:26:56.599120 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="sg-core" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.599129 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="sg-core" Nov 28 13:26:56 crc kubenswrapper[4779]: E1128 13:26:56.599138 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="proxy-httpd" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.599145 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="proxy-httpd" Nov 
28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.599389 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="proxy-httpd" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.599406 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="ceilometer-central-agent" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.599415 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="sg-core" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.599435 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a49e784-eabb-4391-a69f-695474f302b7" containerName="ceilometer-notification-agent" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.607023 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.611760 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.611772 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.611977 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.612318 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.701596 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-config-data\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.701733 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-scripts\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.701787 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13db3856-5125-439c-86a8-4493e5619b44-log-httpd\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.701811 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.701857 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 
13:26:56.701946 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13db3856-5125-439c-86a8-4493e5619b44-run-httpd\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.701983 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr8cj\" (UniqueName: \"kubernetes.io/projected/13db3856-5125-439c-86a8-4493e5619b44-kube-api-access-hr8cj\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.702025 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.804285 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-scripts\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.804359 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13db3856-5125-439c-86a8-4493e5619b44-log-httpd\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.804377 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.804428 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.804472 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13db3856-5125-439c-86a8-4493e5619b44-run-httpd\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.804494 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr8cj\" (UniqueName: \"kubernetes.io/projected/13db3856-5125-439c-86a8-4493e5619b44-kube-api-access-hr8cj\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.804529 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " 
pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.804573 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-config-data\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.805671 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13db3856-5125-439c-86a8-4493e5619b44-run-httpd\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.805877 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13db3856-5125-439c-86a8-4493e5619b44-log-httpd\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.809611 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.809737 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.812256 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.813698 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-config-data\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.814814 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13db3856-5125-439c-86a8-4493e5619b44-scripts\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.828140 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr8cj\" (UniqueName: \"kubernetes.io/projected/13db3856-5125-439c-86a8-4493e5619b44-kube-api-access-hr8cj\") pod \"ceilometer-0\" (UID: \"13db3856-5125-439c-86a8-4493e5619b44\") " pod="openstack/ceilometer-0" Nov 28 13:26:56 crc kubenswrapper[4779]: I1128 13:26:56.934459 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 13:26:57 crc kubenswrapper[4779]: I1128 13:26:57.460063 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 13:26:57 crc kubenswrapper[4779]: I1128 13:26:57.483448 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13db3856-5125-439c-86a8-4493e5619b44","Type":"ContainerStarted","Data":"a4b274e64541eb4def9a4b78d99dd85cb445b49eaac0d1e92fd140710286d2c2"} Nov 28 13:26:57 crc kubenswrapper[4779]: I1128 13:26:57.741720 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a49e784-eabb-4391-a69f-695474f302b7" path="/var/lib/kubelet/pods/6a49e784-eabb-4391-a69f-695474f302b7/volumes" Nov 28 13:26:59 crc kubenswrapper[4779]: I1128 13:26:59.524887 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13db3856-5125-439c-86a8-4493e5619b44","Type":"ContainerStarted","Data":"f84e95c63915754daba98a3340dc95a28237598f9cf40cecd36cc1ba900278ad"} Nov 28 13:27:00 crc kubenswrapper[4779]: I1128 13:27:00.540557 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13db3856-5125-439c-86a8-4493e5619b44","Type":"ContainerStarted","Data":"3d7487bb2d344bc7fcbb11d515cf889821178010a824481d1c0daf153e4d9bb8"} Nov 28 13:27:01 crc kubenswrapper[4779]: I1128 13:27:01.555387 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13db3856-5125-439c-86a8-4493e5619b44","Type":"ContainerStarted","Data":"bee32cb1eb1554ba9fb7d5e12ede0ad8005a1bd1dd4e5ece09f761ec2cdd8c15"} Nov 28 13:27:02 crc kubenswrapper[4779]: I1128 13:27:02.568300 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"13db3856-5125-439c-86a8-4493e5619b44","Type":"ContainerStarted","Data":"90d8aca7b33edd9dd7c3f3031fd9d323737a0c67378a165cb9aab4fb3631aefa"} Nov 28 13:27:02 crc kubenswrapper[4779]: I1128 13:27:02.568789 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 13:27:02 crc kubenswrapper[4779]: I1128 13:27:02.593042 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.93834067 podStartE2EDuration="6.593025102s" podCreationTimestamp="2025-11-28 13:26:56 +0000 UTC" firstStartedPulling="2025-11-28 13:26:57.462140533 +0000 UTC m=+3078.027815897" lastFinishedPulling="2025-11-28 13:27:02.116824975 +0000 UTC m=+3082.682500329" observedRunningTime="2025-11-28 13:27:02.588872582 +0000 UTC m=+3083.154547956" watchObservedRunningTime="2025-11-28 13:27:02.593025102 +0000 UTC m=+3083.158700456" Nov 28 13:27:07 crc kubenswrapper[4779]: I1128 13:27:07.726273 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574" Nov 28 13:27:07 crc kubenswrapper[4779]: E1128 13:27:07.727207 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:27:19 crc kubenswrapper[4779]: I1128 13:27:19.733864 4779 generic.go:334] "Generic (PLEG): container finished" podID="431e808a-cf9a-455e-9178-2a3b30a3a78c" 
containerID="d750849ebd533d589b7bf96573a18f57686cd01cb2b579b15a5bbb39c0fbe8c4" exitCode=137 Nov 28 13:27:19 crc kubenswrapper[4779]: I1128 13:27:19.739504 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"431e808a-cf9a-455e-9178-2a3b30a3a78c","Type":"ContainerDied","Data":"d750849ebd533d589b7bf96573a18f57686cd01cb2b579b15a5bbb39c0fbe8c4"} Nov 28 13:27:19 crc kubenswrapper[4779]: I1128 13:27:19.835521 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 13:27:19 crc kubenswrapper[4779]: I1128 13:27:19.951237 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6pgv\" (UniqueName: \"kubernetes.io/projected/431e808a-cf9a-455e-9178-2a3b30a3a78c-kube-api-access-c6pgv\") pod \"431e808a-cf9a-455e-9178-2a3b30a3a78c\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " Nov 28 13:27:19 crc kubenswrapper[4779]: I1128 13:27:19.951515 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-combined-ca-bundle\") pod \"431e808a-cf9a-455e-9178-2a3b30a3a78c\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " Nov 28 13:27:19 crc kubenswrapper[4779]: I1128 13:27:19.951586 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-config-data\") pod \"431e808a-cf9a-455e-9178-2a3b30a3a78c\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " Nov 28 13:27:19 crc kubenswrapper[4779]: I1128 13:27:19.951609 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-scripts\") pod \"431e808a-cf9a-455e-9178-2a3b30a3a78c\" (UID: \"431e808a-cf9a-455e-9178-2a3b30a3a78c\") " Nov 28 13:27:19 crc kubenswrapper[4779]: I1128 13:27:19.958495 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-scripts" (OuterVolumeSpecName: "scripts") pod "431e808a-cf9a-455e-9178-2a3b30a3a78c" (UID: "431e808a-cf9a-455e-9178-2a3b30a3a78c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:27:19 crc kubenswrapper[4779]: I1128 13:27:19.964123 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431e808a-cf9a-455e-9178-2a3b30a3a78c-kube-api-access-c6pgv" (OuterVolumeSpecName: "kube-api-access-c6pgv") pod "431e808a-cf9a-455e-9178-2a3b30a3a78c" (UID: "431e808a-cf9a-455e-9178-2a3b30a3a78c"). InnerVolumeSpecName "kube-api-access-c6pgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.054067 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.054108 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6pgv\" (UniqueName: \"kubernetes.io/projected/431e808a-cf9a-455e-9178-2a3b30a3a78c-kube-api-access-c6pgv\") on node \"crc\" DevicePath \"\"" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.076403 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-config-data" (OuterVolumeSpecName: "config-data") pod "431e808a-cf9a-455e-9178-2a3b30a3a78c" (UID: "431e808a-cf9a-455e-9178-2a3b30a3a78c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.088243 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "431e808a-cf9a-455e-9178-2a3b30a3a78c" (UID: "431e808a-cf9a-455e-9178-2a3b30a3a78c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.155455 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.155505 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431e808a-cf9a-455e-9178-2a3b30a3a78c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.744185 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"431e808a-cf9a-455e-9178-2a3b30a3a78c","Type":"ContainerDied","Data":"b79439225d0e4eaa171ba61ab222b6f6be2675b2acf1206af8f0fbc1cdc4b1dc"} Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.744518 4779 scope.go:117] "RemoveContainer" containerID="d750849ebd533d589b7bf96573a18f57686cd01cb2b579b15a5bbb39c0fbe8c4" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.744315 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.801973 4779 scope.go:117] "RemoveContainer" containerID="0c8ada8b0b52e8e5b2906ff0bf34cdf25c2ed87c47cdb9d09ca3dace64069cef" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.808006 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.828255 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.835428 4779 scope.go:117] "RemoveContainer" containerID="e9ab39a5cef54632ccca5fa67b7c0c1bb26320a90835c305c519fb2270da368a" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.835556 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 28 13:27:20 crc kubenswrapper[4779]: E1128 13:27:20.836008 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-listener" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.836022 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-listener" Nov 28 13:27:20 crc kubenswrapper[4779]: E1128 13:27:20.836034 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-api" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.836041 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-api" Nov 28 13:27:20 crc kubenswrapper[4779]: E1128 13:27:20.836057 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-evaluator" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.836063 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-evaluator" Nov 28 13:27:20 crc kubenswrapper[4779]: E1128 13:27:20.836081 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-notifier" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.836087 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-notifier" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.836274 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-listener" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.836288 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-notifier" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.836305 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-api" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.836317 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" containerName="aodh-evaluator" Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.838187 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.843412 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.844660 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wdvkm"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.844852 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.845085 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.845221 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.845339 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.861610 4779 scope.go:117] "RemoveContainer" containerID="9252e640db47928ace9578de3cc3f4eae0abb49973b675796d9788adfce1c65e"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.881498 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-scripts\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.881647 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6s66\" (UniqueName: \"kubernetes.io/projected/1a8e5660-e380-4665-b764-3fea920548f1-kube-api-access-g6s66\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.881905 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.881996 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-internal-tls-certs\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.882202 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-config-data\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.882329 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-public-tls-certs\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.983764 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-scripts\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.983887 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6s66\" (UniqueName: \"kubernetes.io/projected/1a8e5660-e380-4665-b764-3fea920548f1-kube-api-access-g6s66\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.983958 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.983991 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-internal-tls-certs\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.984038 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-config-data\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.984083 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-public-tls-certs\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.989830 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-scripts\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.990959 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-config-data\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.991626 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.991630 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-public-tls-certs\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:20 crc kubenswrapper[4779]: I1128 13:27:20.994477 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-internal-tls-certs\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:21 crc kubenswrapper[4779]: I1128 13:27:21.014372 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6s66\" (UniqueName: \"kubernetes.io/projected/1a8e5660-e380-4665-b764-3fea920548f1-kube-api-access-g6s66\") pod \"aodh-0\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " pod="openstack/aodh-0"
Nov 28 13:27:21 crc kubenswrapper[4779]: I1128 13:27:21.159151 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 28 13:27:21 crc kubenswrapper[4779]: I1128 13:27:21.694157 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 28 13:27:21 crc kubenswrapper[4779]: I1128 13:27:21.736194 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="431e808a-cf9a-455e-9178-2a3b30a3a78c" path="/var/lib/kubelet/pods/431e808a-cf9a-455e-9178-2a3b30a3a78c/volumes"
Nov 28 13:27:21 crc kubenswrapper[4779]: I1128 13:27:21.753702 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1a8e5660-e380-4665-b764-3fea920548f1","Type":"ContainerStarted","Data":"25467aad9530ff2e13747dbdfbaf2f429ad22a1b8bca581517d7099b52cdb0a6"}
Nov 28 13:27:22 crc kubenswrapper[4779]: I1128 13:27:22.725791 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:27:22 crc kubenswrapper[4779]: E1128 13:27:22.726582 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:27:22 crc kubenswrapper[4779]: I1128 13:27:22.769794 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1a8e5660-e380-4665-b764-3fea920548f1","Type":"ContainerStarted","Data":"ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de"}
Nov 28 13:27:23 crc kubenswrapper[4779]: I1128 13:27:23.781922 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1a8e5660-e380-4665-b764-3fea920548f1","Type":"ContainerStarted","Data":"0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257"}
Nov 28 13:27:24 crc kubenswrapper[4779]: I1128 13:27:24.800925 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1a8e5660-e380-4665-b764-3fea920548f1","Type":"ContainerStarted","Data":"f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021"}
Nov 28 13:27:25 crc kubenswrapper[4779]: I1128 13:27:25.816733 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1a8e5660-e380-4665-b764-3fea920548f1","Type":"ContainerStarted","Data":"a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584"}
Nov 28 13:27:25 crc kubenswrapper[4779]: I1128 13:27:25.858868 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.519180407 podStartE2EDuration="5.858845829s" podCreationTimestamp="2025-11-28 13:27:20 +0000 UTC" firstStartedPulling="2025-11-28 13:27:21.689799375 +0000 UTC m=+3102.255474729" lastFinishedPulling="2025-11-28 13:27:25.029464797 +0000 UTC m=+3105.595140151" observedRunningTime="2025-11-28 13:27:25.841646681 +0000 UTC m=+3106.407322055" watchObservedRunningTime="2025-11-28 13:27:25.858845829 +0000 UTC m=+3106.424521193"
Nov 28 13:27:26 crc kubenswrapper[4779]: I1128 13:27:26.941794 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 28 13:27:37 crc kubenswrapper[4779]: I1128 13:27:37.727053 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:27:37 crc kubenswrapper[4779]: E1128 13:27:37.728461 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:27:48 crc kubenswrapper[4779]: I1128 13:27:48.727459 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:27:48 crc kubenswrapper[4779]: E1128 13:27:48.728191 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:28:02 crc kubenswrapper[4779]: I1128 13:28:02.726851 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:28:02 crc kubenswrapper[4779]: E1128 13:28:02.729022 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:28:13 crc kubenswrapper[4779]: I1128 13:28:13.726986 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:28:13 crc kubenswrapper[4779]: E1128 13:28:13.727933 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:28:26 crc kubenswrapper[4779]: I1128 13:28:26.727192 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:28:26 crc kubenswrapper[4779]: E1128 13:28:26.728287 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:28:37 crc kubenswrapper[4779]: I1128 13:28:37.726815 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:28:37 crc kubenswrapper[4779]: E1128 13:28:37.727857 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:28:49 crc kubenswrapper[4779]: I1128 13:28:49.726151 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:28:49 crc kubenswrapper[4779]: E1128 13:28:49.727083 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:29:01 crc kubenswrapper[4779]: I1128 13:29:01.727382 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:29:01 crc kubenswrapper[4779]: E1128 13:29:01.728354 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:29:13 crc kubenswrapper[4779]: I1128 13:29:13.726391 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:29:13 crc kubenswrapper[4779]: E1128 13:29:13.727226 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:29:27 crc kubenswrapper[4779]: I1128 13:29:27.726718 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:29:27 crc kubenswrapper[4779]: E1128 13:29:27.728388 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:29:40 crc kubenswrapper[4779]: I1128 13:29:40.727295 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:29:40 crc kubenswrapper[4779]: E1128 13:29:40.728382 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:29:54 crc kubenswrapper[4779]: I1128 13:29:54.726196 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574"
Nov 28 13:29:55 crc kubenswrapper[4779]: I1128 13:29:55.427336 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"3e44906474ac2ee22a4abe440e8a06eaadb743a6b2583926c64ac53c9f7bc166"}
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.143185 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"]
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.144797 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.146646 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.147662 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.153032 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"]
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.248316 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393aa6de-ee73-4f6e-966e-80e79cffe850-config-volume\") pod \"collect-profiles-29405610-cnsvs\" (UID: \"393aa6de-ee73-4f6e-966e-80e79cffe850\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.248651 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsqlz\" (UniqueName: \"kubernetes.io/projected/393aa6de-ee73-4f6e-966e-80e79cffe850-kube-api-access-fsqlz\") pod \"collect-profiles-29405610-cnsvs\" (UID: \"393aa6de-ee73-4f6e-966e-80e79cffe850\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.248683 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/393aa6de-ee73-4f6e-966e-80e79cffe850-secret-volume\") pod \"collect-profiles-29405610-cnsvs\" (UID: \"393aa6de-ee73-4f6e-966e-80e79cffe850\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.350499 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsqlz\" (UniqueName: \"kubernetes.io/projected/393aa6de-ee73-4f6e-966e-80e79cffe850-kube-api-access-fsqlz\") pod \"collect-profiles-29405610-cnsvs\" (UID: \"393aa6de-ee73-4f6e-966e-80e79cffe850\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.350830 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/393aa6de-ee73-4f6e-966e-80e79cffe850-secret-volume\") pod \"collect-profiles-29405610-cnsvs\" (UID: \"393aa6de-ee73-4f6e-966e-80e79cffe850\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.351150 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393aa6de-ee73-4f6e-966e-80e79cffe850-config-volume\") pod \"collect-profiles-29405610-cnsvs\" (UID: \"393aa6de-ee73-4f6e-966e-80e79cffe850\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.352480 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393aa6de-ee73-4f6e-966e-80e79cffe850-config-volume\") pod \"collect-profiles-29405610-cnsvs\" (UID: \"393aa6de-ee73-4f6e-966e-80e79cffe850\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.357729 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/393aa6de-ee73-4f6e-966e-80e79cffe850-secret-volume\") pod \"collect-profiles-29405610-cnsvs\" (UID: \"393aa6de-ee73-4f6e-966e-80e79cffe850\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.367709 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsqlz\" (UniqueName: \"kubernetes.io/projected/393aa6de-ee73-4f6e-966e-80e79cffe850-kube-api-access-fsqlz\") pod \"collect-profiles-29405610-cnsvs\" (UID: \"393aa6de-ee73-4f6e-966e-80e79cffe850\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.464875 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:00 crc kubenswrapper[4779]: I1128 13:30:00.981340 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"]
Nov 28 13:30:00 crc kubenswrapper[4779]: W1128 13:30:00.982527 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod393aa6de_ee73_4f6e_966e_80e79cffe850.slice/crio-b36d4a3053217fa80b804c1747479e637c8f7dd22d6f43ab1467b9917a0a3bbd WatchSource:0}: Error finding container b36d4a3053217fa80b804c1747479e637c8f7dd22d6f43ab1467b9917a0a3bbd: Status 404 returned error can't find the container with id b36d4a3053217fa80b804c1747479e637c8f7dd22d6f43ab1467b9917a0a3bbd
Nov 28 13:30:01 crc kubenswrapper[4779]: I1128 13:30:01.494379 4779 generic.go:334] "Generic (PLEG): container finished" podID="393aa6de-ee73-4f6e-966e-80e79cffe850" containerID="3910f247cc07c5d2bcf16c9e0efcb7c855fe52515ffd20689f7fa9036ae0715e" exitCode=0
Nov 28 13:30:01 crc kubenswrapper[4779]: I1128 13:30:01.494639 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs" event={"ID":"393aa6de-ee73-4f6e-966e-80e79cffe850","Type":"ContainerDied","Data":"3910f247cc07c5d2bcf16c9e0efcb7c855fe52515ffd20689f7fa9036ae0715e"}
Nov 28 13:30:01 crc kubenswrapper[4779]: I1128 13:30:01.494676 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs" event={"ID":"393aa6de-ee73-4f6e-966e-80e79cffe850","Type":"ContainerStarted","Data":"b36d4a3053217fa80b804c1747479e637c8f7dd22d6f43ab1467b9917a0a3bbd"}
Nov 28 13:30:02 crc kubenswrapper[4779]: I1128 13:30:02.858347 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:02 crc kubenswrapper[4779]: I1128 13:30:02.909317 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsqlz\" (UniqueName: \"kubernetes.io/projected/393aa6de-ee73-4f6e-966e-80e79cffe850-kube-api-access-fsqlz\") pod \"393aa6de-ee73-4f6e-966e-80e79cffe850\" (UID: \"393aa6de-ee73-4f6e-966e-80e79cffe850\") "
Nov 28 13:30:02 crc kubenswrapper[4779]: I1128 13:30:02.909468 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/393aa6de-ee73-4f6e-966e-80e79cffe850-secret-volume\") pod \"393aa6de-ee73-4f6e-966e-80e79cffe850\" (UID: \"393aa6de-ee73-4f6e-966e-80e79cffe850\") "
Nov 28 13:30:02 crc kubenswrapper[4779]: I1128 13:30:02.909682 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393aa6de-ee73-4f6e-966e-80e79cffe850-config-volume\") pod \"393aa6de-ee73-4f6e-966e-80e79cffe850\" (UID: \"393aa6de-ee73-4f6e-966e-80e79cffe850\") "
Nov 28 13:30:02 crc kubenswrapper[4779]: I1128 13:30:02.910623 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/393aa6de-ee73-4f6e-966e-80e79cffe850-config-volume" (OuterVolumeSpecName: "config-volume") pod "393aa6de-ee73-4f6e-966e-80e79cffe850" (UID: "393aa6de-ee73-4f6e-966e-80e79cffe850"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 13:30:02 crc kubenswrapper[4779]: I1128 13:30:02.911576 4779 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393aa6de-ee73-4f6e-966e-80e79cffe850-config-volume\") on node \"crc\" DevicePath \"\""
Nov 28 13:30:02 crc kubenswrapper[4779]: I1128 13:30:02.914545 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/393aa6de-ee73-4f6e-966e-80e79cffe850-kube-api-access-fsqlz" (OuterVolumeSpecName: "kube-api-access-fsqlz") pod "393aa6de-ee73-4f6e-966e-80e79cffe850" (UID: "393aa6de-ee73-4f6e-966e-80e79cffe850"). InnerVolumeSpecName "kube-api-access-fsqlz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:30:02 crc kubenswrapper[4779]: I1128 13:30:02.914798 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/393aa6de-ee73-4f6e-966e-80e79cffe850-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "393aa6de-ee73-4f6e-966e-80e79cffe850" (UID: "393aa6de-ee73-4f6e-966e-80e79cffe850"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:30:03 crc kubenswrapper[4779]: I1128 13:30:03.013213 4779 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/393aa6de-ee73-4f6e-966e-80e79cffe850-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 28 13:30:03 crc kubenswrapper[4779]: I1128 13:30:03.013252 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsqlz\" (UniqueName: \"kubernetes.io/projected/393aa6de-ee73-4f6e-966e-80e79cffe850-kube-api-access-fsqlz\") on node \"crc\" DevicePath \"\""
Nov 28 13:30:03 crc kubenswrapper[4779]: I1128 13:30:03.530657 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs" event={"ID":"393aa6de-ee73-4f6e-966e-80e79cffe850","Type":"ContainerDied","Data":"b36d4a3053217fa80b804c1747479e637c8f7dd22d6f43ab1467b9917a0a3bbd"}
Nov 28 13:30:03 crc kubenswrapper[4779]: I1128 13:30:03.531016 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b36d4a3053217fa80b804c1747479e637c8f7dd22d6f43ab1467b9917a0a3bbd"
Nov 28 13:30:03 crc kubenswrapper[4779]: I1128 13:30:03.530722 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405610-cnsvs"
Nov 28 13:30:03 crc kubenswrapper[4779]: I1128 13:30:03.934432 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj"]
Nov 28 13:30:03 crc kubenswrapper[4779]: I1128 13:30:03.943389 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405565-kxfxj"]
Nov 28 13:30:05 crc kubenswrapper[4779]: I1128 13:30:05.743808 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16957643-e7b1-4447-a15c-da6bdb1fbe75" path="/var/lib/kubelet/pods/16957643-e7b1-4447-a15c-da6bdb1fbe75/volumes"
Nov 28 13:30:11 crc kubenswrapper[4779]: I1128 13:30:11.292257 4779 scope.go:117] "RemoveContainer" containerID="c41911ddb09c5a07a07fdcd95dd3d446a039175941c64231ce3de1a5eaeddfaa"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.331413 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zx6m7"]
Nov 28 13:30:14 crc kubenswrapper[4779]: E1128 13:30:14.332689 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="393aa6de-ee73-4f6e-966e-80e79cffe850" containerName="collect-profiles"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.332709 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="393aa6de-ee73-4f6e-966e-80e79cffe850" containerName="collect-profiles"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.332996 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="393aa6de-ee73-4f6e-966e-80e79cffe850" containerName="collect-profiles"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.334727 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.343113 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zx6m7"]
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.440232 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x96fs\" (UniqueName: \"kubernetes.io/projected/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-kube-api-access-x96fs\") pod \"redhat-operators-zx6m7\" (UID: \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\") " pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.440389 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-utilities\") pod \"redhat-operators-zx6m7\" (UID: \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\") " pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.440550 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-catalog-content\") pod \"redhat-operators-zx6m7\" (UID: \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\") " pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.542446 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-utilities\") pod \"redhat-operators-zx6m7\" (UID: \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\") " pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.542580 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-catalog-content\") pod \"redhat-operators-zx6m7\" (UID: \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\") " pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.542620 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x96fs\" (UniqueName: \"kubernetes.io/projected/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-kube-api-access-x96fs\") pod \"redhat-operators-zx6m7\" (UID: \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\") " pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.543054 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-utilities\") pod \"redhat-operators-zx6m7\" (UID: \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\") " pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.543085 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-catalog-content\") pod \"redhat-operators-zx6m7\" (UID: \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\") " pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.563438 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x96fs\" (UniqueName: \"kubernetes.io/projected/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-kube-api-access-x96fs\") pod \"redhat-operators-zx6m7\" (UID: \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\") " pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:14 crc kubenswrapper[4779]: I1128 13:30:14.660576 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:15 crc kubenswrapper[4779]: I1128 13:30:15.689214 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zx6m7"]
Nov 28 13:30:16 crc kubenswrapper[4779]: I1128 13:30:16.648523 4779 generic.go:334] "Generic (PLEG): container finished" podID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" containerID="1ed574dcbb53603d3a55d077b223c531bd6ae6bf3971059acdaf9697e6d08f8b" exitCode=0
Nov 28 13:30:16 crc kubenswrapper[4779]: I1128 13:30:16.648613 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx6m7" event={"ID":"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3","Type":"ContainerDied","Data":"1ed574dcbb53603d3a55d077b223c531bd6ae6bf3971059acdaf9697e6d08f8b"}
Nov 28 13:30:16 crc kubenswrapper[4779]: I1128 13:30:16.649413 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx6m7" event={"ID":"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3","Type":"ContainerStarted","Data":"265f096ade6f78265218374c1b1d793f1e4813771f6c57de6088e3347951b79b"}
Nov 28 13:30:16 crc kubenswrapper[4779]: I1128 13:30:16.651522 4779 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 13:30:18 crc kubenswrapper[4779]: I1128 13:30:18.672314 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx6m7" event={"ID":"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3","Type":"ContainerStarted","Data":"c4df4fd8d9a18491dd5e27e69b380e1d532fe2687d6516035a097a7f5e6270e8"}
Nov 28 13:30:20 crc kubenswrapper[4779]: I1128 13:30:20.702505 4779 generic.go:334] "Generic (PLEG): container finished" podID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" containerID="c4df4fd8d9a18491dd5e27e69b380e1d532fe2687d6516035a097a7f5e6270e8" exitCode=0
Nov 28 13:30:20 crc kubenswrapper[4779]: I1128 13:30:20.702589 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx6m7" event={"ID":"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3","Type":"ContainerDied","Data":"c4df4fd8d9a18491dd5e27e69b380e1d532fe2687d6516035a097a7f5e6270e8"}
Nov 28 13:30:21 crc kubenswrapper[4779]: I1128 13:30:21.711936 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx6m7" event={"ID":"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3","Type":"ContainerStarted","Data":"26c0c39ff8082c831ddd9fdb0c199b531b9ee0a3a28851573af98fcc82377c49"}
Nov 28 13:30:21 crc kubenswrapper[4779]: I1128 13:30:21.744772 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zx6m7" podStartSLOduration=3.166666425 podStartE2EDuration="7.744751479s" podCreationTimestamp="2025-11-28 13:30:14 +0000 UTC" firstStartedPulling="2025-11-28 13:30:16.651306997 +0000 UTC m=+3277.216982351" lastFinishedPulling="2025-11-28 13:30:21.229392001 +0000 UTC m=+3281.795067405" observedRunningTime="2025-11-28 13:30:21.733590652 +0000 UTC m=+3282.299266026" watchObservedRunningTime="2025-11-28 13:30:21.744751479 +0000 UTC m=+3282.310426863"
Nov 28 13:30:22 crc kubenswrapper[4779]: I1128 13:30:22.604526 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7574d9569-x822f_f1d9753d-b49d-4e32-b312-137314283984/manager/0.log"
Nov 28 13:30:24 crc kubenswrapper[4779]: I1128 13:30:24.661006 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:24 crc kubenswrapper[4779]: I1128 13:30:24.661308 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:25 crc kubenswrapper[4779]: I1128 13:30:25.723454 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zx6m7" podUID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" containerName="registry-server" probeResult="failure" output=<
Nov 28 13:30:25 crc kubenswrapper[4779]: timeout: failed to connect service ":50051" within 1s
Nov 28 13:30:25 crc kubenswrapper[4779]: >
Nov 28 13:30:34 crc kubenswrapper[4779]: I1128 13:30:34.718308 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:34 crc kubenswrapper[4779]: I1128 13:30:34.778719 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.130191 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"]
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.133620 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.139786 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.149892 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"]
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.274954 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zx6m7"]
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.275379 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zx6m7" podUID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" containerName="registry-server" containerID="cri-o://26c0c39ff8082c831ddd9fdb0c199b531b9ee0a3a28851573af98fcc82377c49" gracePeriod=2
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.340481 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zgsn\" (UniqueName: \"kubernetes.io/projected/e307524d-7be7-4841-ac8a-dea95d4c976e-kube-api-access-7zgsn\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4\" (UID: \"e307524d-7be7-4841-ac8a-dea95d4c976e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.340689 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e307524d-7be7-4841-ac8a-dea95d4c976e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4\" (UID: \"e307524d-7be7-4841-ac8a-dea95d4c976e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.340774 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e307524d-7be7-4841-ac8a-dea95d4c976e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4\" (UID: \"e307524d-7be7-4841-ac8a-dea95d4c976e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.442611 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zgsn\" (UniqueName: \"kubernetes.io/projected/e307524d-7be7-4841-ac8a-dea95d4c976e-kube-api-access-7zgsn\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4\" (UID: \"e307524d-7be7-4841-ac8a-dea95d4c976e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.442834 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e307524d-7be7-4841-ac8a-dea95d4c976e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4\" (UID: \"e307524d-7be7-4841-ac8a-dea95d4c976e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.442999 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e307524d-7be7-4841-ac8a-dea95d4c976e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4\" (UID: \"e307524d-7be7-4841-ac8a-dea95d4c976e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.443849 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e307524d-7be7-4841-ac8a-dea95d4c976e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4\" (UID: \"e307524d-7be7-4841-ac8a-dea95d4c976e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.443981 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e307524d-7be7-4841-ac8a-dea95d4c976e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4\" (UID: \"e307524d-7be7-4841-ac8a-dea95d4c976e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.477353 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zgsn\" (UniqueName: \"kubernetes.io/projected/e307524d-7be7-4841-ac8a-dea95d4c976e-kube-api-access-7zgsn\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4\" (UID: \"e307524d-7be7-4841-ac8a-dea95d4c976e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:30:37 crc kubenswrapper[4779]: I1128 13:30:37.759004 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:30:38 crc kubenswrapper[4779]: I1128 13:30:38.250576 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"]
Nov 28 13:30:38 crc kubenswrapper[4779]: I1128 13:30:38.894759 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4" event={"ID":"e307524d-7be7-4841-ac8a-dea95d4c976e","Type":"ContainerStarted","Data":"6ed261869187bfb6feb6f51f5796f7591adf0c8723f2db0be19c99b09e14fc82"}
Nov 28 13:30:38 crc kubenswrapper[4779]: I1128 13:30:38.895052 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4" event={"ID":"e307524d-7be7-4841-ac8a-dea95d4c976e","Type":"ContainerStarted","Data":"a5f050d8058c29abd73a9de54853525217987f4986878f29ab3182c6eed4ace6"}
Nov 28 13:30:38 crc kubenswrapper[4779]: I1128 13:30:38.897531 4779 generic.go:334] "Generic (PLEG): container finished" podID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" containerID="26c0c39ff8082c831ddd9fdb0c199b531b9ee0a3a28851573af98fcc82377c49" exitCode=0
Nov 28 13:30:38 crc kubenswrapper[4779]: I1128 13:30:38.897570 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx6m7" event={"ID":"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3","Type":"ContainerDied","Data":"26c0c39ff8082c831ddd9fdb0c199b531b9ee0a3a28851573af98fcc82377c49"}
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.134918 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.182369 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-utilities\") pod \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\" (UID: \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\") "
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.182514 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x96fs\" (UniqueName: \"kubernetes.io/projected/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-kube-api-access-x96fs\") pod \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\" (UID: \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\") "
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.182770 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-catalog-content\") pod \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\" (UID: \"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3\") "
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.183920 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-utilities" (OuterVolumeSpecName: "utilities") pod "ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" (UID: "ee5d39ae-0d7a-4740-b04e-8f34311b7bd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.191173 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-kube-api-access-x96fs" (OuterVolumeSpecName: "kube-api-access-x96fs") pod "ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" (UID: "ee5d39ae-0d7a-4740-b04e-8f34311b7bd3"). InnerVolumeSpecName "kube-api-access-x96fs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.285653 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.285695 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x96fs\" (UniqueName: \"kubernetes.io/projected/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-kube-api-access-x96fs\") on node \"crc\" DevicePath \"\""
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.871699 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" (UID: "ee5d39ae-0d7a-4740-b04e-8f34311b7bd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.897246 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.908312 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx6m7" event={"ID":"ee5d39ae-0d7a-4740-b04e-8f34311b7bd3","Type":"ContainerDied","Data":"265f096ade6f78265218374c1b1d793f1e4813771f6c57de6088e3347951b79b"}
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.908367 4779 scope.go:117] "RemoveContainer" containerID="26c0c39ff8082c831ddd9fdb0c199b531b9ee0a3a28851573af98fcc82377c49"
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.908331 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zx6m7"
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.934360 4779 scope.go:117] "RemoveContainer" containerID="c4df4fd8d9a18491dd5e27e69b380e1d532fe2687d6516035a097a7f5e6270e8"
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.946470 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zx6m7"]
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.954298 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zx6m7"]
Nov 28 13:30:39 crc kubenswrapper[4779]: I1128 13:30:39.970148 4779 scope.go:117] "RemoveContainer" containerID="1ed574dcbb53603d3a55d077b223c531bd6ae6bf3971059acdaf9697e6d08f8b"
Nov 28 13:30:40 crc kubenswrapper[4779]: I1128 13:30:40.925754 4779 generic.go:334] "Generic (PLEG): container finished" podID="e307524d-7be7-4841-ac8a-dea95d4c976e" containerID="6ed261869187bfb6feb6f51f5796f7591adf0c8723f2db0be19c99b09e14fc82" exitCode=0
Nov 28 13:30:40 crc kubenswrapper[4779]: I1128 13:30:40.925866 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4" event={"ID":"e307524d-7be7-4841-ac8a-dea95d4c976e","Type":"ContainerDied","Data":"6ed261869187bfb6feb6f51f5796f7591adf0c8723f2db0be19c99b09e14fc82"}
Nov 28 13:30:41 crc kubenswrapper[4779]: I1128 13:30:41.737674 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" path="/var/lib/kubelet/pods/ee5d39ae-0d7a-4740-b04e-8f34311b7bd3/volumes"
Nov 28 13:30:43 crc kubenswrapper[4779]: I1128 13:30:43.956187 4779 generic.go:334] "Generic (PLEG): container finished" podID="e307524d-7be7-4841-ac8a-dea95d4c976e" containerID="3517e9f25d835590a312f0fe846a1d8b4d9439b47bc6873716f5916b637c65a2" exitCode=0
Nov 28 13:30:43 crc kubenswrapper[4779]: I1128 13:30:43.956240 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4" event={"ID":"e307524d-7be7-4841-ac8a-dea95d4c976e","Type":"ContainerDied","Data":"3517e9f25d835590a312f0fe846a1d8b4d9439b47bc6873716f5916b637c65a2"}
Nov 28 13:30:44 crc kubenswrapper[4779]: I1128 13:30:44.970566 4779 generic.go:334] "Generic (PLEG): container finished" podID="e307524d-7be7-4841-ac8a-dea95d4c976e" containerID="f2ea11a66c896aa019ae525d490ae91345f7210643e858f8fb8098104ecfe14b" exitCode=0
Nov 28 13:30:44 crc kubenswrapper[4779]: I1128 13:30:44.970629 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4" event={"ID":"e307524d-7be7-4841-ac8a-dea95d4c976e","Type":"ContainerDied","Data":"f2ea11a66c896aa019ae525d490ae91345f7210643e858f8fb8098104ecfe14b"}
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.321228 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.430696 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e307524d-7be7-4841-ac8a-dea95d4c976e-bundle\") pod \"e307524d-7be7-4841-ac8a-dea95d4c976e\" (UID: \"e307524d-7be7-4841-ac8a-dea95d4c976e\") "
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.431020 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e307524d-7be7-4841-ac8a-dea95d4c976e-util\") pod \"e307524d-7be7-4841-ac8a-dea95d4c976e\" (UID: \"e307524d-7be7-4841-ac8a-dea95d4c976e\") "
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.431061 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zgsn\" (UniqueName: \"kubernetes.io/projected/e307524d-7be7-4841-ac8a-dea95d4c976e-kube-api-access-7zgsn\") pod \"e307524d-7be7-4841-ac8a-dea95d4c976e\" (UID: \"e307524d-7be7-4841-ac8a-dea95d4c976e\") "
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.433827 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e307524d-7be7-4841-ac8a-dea95d4c976e-bundle" (OuterVolumeSpecName: "bundle") pod "e307524d-7be7-4841-ac8a-dea95d4c976e" (UID: "e307524d-7be7-4841-ac8a-dea95d4c976e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.443418 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e307524d-7be7-4841-ac8a-dea95d4c976e-kube-api-access-7zgsn" (OuterVolumeSpecName: "kube-api-access-7zgsn") pod "e307524d-7be7-4841-ac8a-dea95d4c976e" (UID: "e307524d-7be7-4841-ac8a-dea95d4c976e"). InnerVolumeSpecName "kube-api-access-7zgsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.444416 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e307524d-7be7-4841-ac8a-dea95d4c976e-util" (OuterVolumeSpecName: "util") pod "e307524d-7be7-4841-ac8a-dea95d4c976e" (UID: "e307524d-7be7-4841-ac8a-dea95d4c976e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.533039 4779 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e307524d-7be7-4841-ac8a-dea95d4c976e-util\") on node \"crc\" DevicePath \"\""
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.533078 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zgsn\" (UniqueName: \"kubernetes.io/projected/e307524d-7be7-4841-ac8a-dea95d4c976e-kube-api-access-7zgsn\") on node \"crc\" DevicePath \"\""
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.533105 4779 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e307524d-7be7-4841-ac8a-dea95d4c976e-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.994989 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4" event={"ID":"e307524d-7be7-4841-ac8a-dea95d4c976e","Type":"ContainerDied","Data":"a5f050d8058c29abd73a9de54853525217987f4986878f29ab3182c6eed4ace6"}
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.995057 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f050d8058c29abd73a9de54853525217987f4986878f29ab3182c6eed4ace6"
Nov 28 13:30:46 crc kubenswrapper[4779]: I1128 13:30:46.995212 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.971948 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-l5jtg"]
Nov 28 13:31:02 crc kubenswrapper[4779]: E1128 13:31:02.973795 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e307524d-7be7-4841-ac8a-dea95d4c976e" containerName="extract"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.973880 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e307524d-7be7-4841-ac8a-dea95d4c976e" containerName="extract"
Nov 28 13:31:02 crc kubenswrapper[4779]: E1128 13:31:02.973947 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" containerName="extract-utilities"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.973999 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" containerName="extract-utilities"
Nov 28 13:31:02 crc kubenswrapper[4779]: E1128 13:31:02.974080 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e307524d-7be7-4841-ac8a-dea95d4c976e" containerName="pull"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.974163 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e307524d-7be7-4841-ac8a-dea95d4c976e" containerName="pull"
Nov 28 13:31:02 crc kubenswrapper[4779]: E1128 13:31:02.974232 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" containerName="registry-server"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.974311 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" containerName="registry-server"
Nov 28 13:31:02 crc kubenswrapper[4779]: E1128 13:31:02.974379 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" containerName="extract-content"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.974971 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" containerName="extract-content"
Nov 28 13:31:02 crc kubenswrapper[4779]: E1128 13:31:02.975114 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e307524d-7be7-4841-ac8a-dea95d4c976e" containerName="util"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.975204 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e307524d-7be7-4841-ac8a-dea95d4c976e" containerName="util"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.975657 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="e307524d-7be7-4841-ac8a-dea95d4c976e" containerName="extract"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.975754 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5d39ae-0d7a-4740-b04e-8f34311b7bd3" containerName="registry-server"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.976980 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-l5jtg"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.978877 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-jz2cs"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.980529 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.980886 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Nov 28 13:31:02 crc kubenswrapper[4779]: I1128 13:31:02.990347 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-l5jtg"]
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.081926 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfnqf\" (UniqueName: \"kubernetes.io/projected/06f1d580-00d9-4699-8e8d-8087523ef59a-kube-api-access-zfnqf\") pod \"obo-prometheus-operator-668cf9dfbb-l5jtg\" (UID: \"06f1d580-00d9-4699-8e8d-8087523ef59a\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-l5jtg"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.102571 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4"]
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.104009 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.110420 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.110605 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-nmjjz"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.142743 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2"]
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.144860 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.176390 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2"]
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.183527 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfnqf\" (UniqueName: \"kubernetes.io/projected/06f1d580-00d9-4699-8e8d-8087523ef59a-kube-api-access-zfnqf\") pod \"obo-prometheus-operator-668cf9dfbb-l5jtg\" (UID: \"06f1d580-00d9-4699-8e8d-8087523ef59a\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-l5jtg"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.183587 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fc94f4f-278c-4c4f-a547-2779183ca661-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4\" (UID: \"4fc94f4f-278c-4c4f-a547-2779183ca661\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.183642 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fc94f4f-278c-4c4f-a547-2779183ca661-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4\" (UID: \"4fc94f4f-278c-4c4f-a547-2779183ca661\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.200437 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4"]
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.226736 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfnqf\" (UniqueName: \"kubernetes.io/projected/06f1d580-00d9-4699-8e8d-8087523ef59a-kube-api-access-zfnqf\") pod \"obo-prometheus-operator-668cf9dfbb-l5jtg\" (UID: \"06f1d580-00d9-4699-8e8d-8087523ef59a\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-l5jtg"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.286269 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fc94f4f-278c-4c4f-a547-2779183ca661-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4\" (UID: \"4fc94f4f-278c-4c4f-a547-2779183ca661\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.286354 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fc94f4f-278c-4c4f-a547-2779183ca661-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4\" (UID: \"4fc94f4f-278c-4c4f-a547-2779183ca661\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.286567 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9aef4803-506a-4ca3-9bdd-2ef8865a975c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2\" (UID: \"9aef4803-506a-4ca3-9bdd-2ef8865a975c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.286604 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9aef4803-506a-4ca3-9bdd-2ef8865a975c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2\" (UID: \"9aef4803-506a-4ca3-9bdd-2ef8865a975c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.291891 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fc94f4f-278c-4c4f-a547-2779183ca661-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4\" (UID: \"4fc94f4f-278c-4c4f-a547-2779183ca661\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.302558 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-l5jtg"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.316085 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fc94f4f-278c-4c4f-a547-2779183ca661-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4\" (UID: \"4fc94f4f-278c-4c4f-a547-2779183ca661\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.346348 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-z4wlc"]
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.347828 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.365594 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.365970 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-52xwv"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.396637 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-z4wlc"]
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.405731 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9aef4803-506a-4ca3-9bdd-2ef8865a975c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2\" (UID: \"9aef4803-506a-4ca3-9bdd-2ef8865a975c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.405976 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9aef4803-506a-4ca3-9bdd-2ef8865a975c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2\" (UID: \"9aef4803-506a-4ca3-9bdd-2ef8865a975c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.417740 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9aef4803-506a-4ca3-9bdd-2ef8865a975c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2\" (UID: \"9aef4803-506a-4ca3-9bdd-2ef8865a975c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.419916 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.430207 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9aef4803-506a-4ca3-9bdd-2ef8865a975c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2\" (UID: \"9aef4803-506a-4ca3-9bdd-2ef8865a975c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2"
Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.466673 4779 util.go:30] "No sandbox for pod can be
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.513385 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6844\" (UniqueName: \"kubernetes.io/projected/179dd1bb-6c8d-443a-a408-40273ae8f6f6-kube-api-access-p6844\") pod \"observability-operator-d8bb48f5d-z4wlc\" (UID: \"179dd1bb-6c8d-443a-a408-40273ae8f6f6\") " pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.513549 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/179dd1bb-6c8d-443a-a408-40273ae8f6f6-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-z4wlc\" (UID: \"179dd1bb-6c8d-443a-a408-40273ae8f6f6\") " pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.547574 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-njrck"] Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.578847 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-njrck" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.600261 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-mxzsp" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.601694 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-njrck"] Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.616677 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6844\" (UniqueName: \"kubernetes.io/projected/179dd1bb-6c8d-443a-a408-40273ae8f6f6-kube-api-access-p6844\") pod \"observability-operator-d8bb48f5d-z4wlc\" (UID: \"179dd1bb-6c8d-443a-a408-40273ae8f6f6\") " pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.617237 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/179dd1bb-6c8d-443a-a408-40273ae8f6f6-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-z4wlc\" (UID: \"179dd1bb-6c8d-443a-a408-40273ae8f6f6\") " pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.624881 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/179dd1bb-6c8d-443a-a408-40273ae8f6f6-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-z4wlc\" (UID: \"179dd1bb-6c8d-443a-a408-40273ae8f6f6\") " pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.637842 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6844\" (UniqueName: \"kubernetes.io/projected/179dd1bb-6c8d-443a-a408-40273ae8f6f6-kube-api-access-p6844\") pod \"observability-operator-d8bb48f5d-z4wlc\" (UID: \"179dd1bb-6c8d-443a-a408-40273ae8f6f6\") " pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.719419 4779 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrfq9\" (UniqueName: \"kubernetes.io/projected/cfb01668-ce93-42c0-8c77-1aaac40d5160-kube-api-access-zrfq9\") pod \"perses-operator-5446b9c989-njrck\" (UID: \"cfb01668-ce93-42c0-8c77-1aaac40d5160\") " pod="openshift-operators/perses-operator-5446b9c989-njrck" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.719720 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/cfb01668-ce93-42c0-8c77-1aaac40d5160-openshift-service-ca\") pod \"perses-operator-5446b9c989-njrck\" (UID: \"cfb01668-ce93-42c0-8c77-1aaac40d5160\") " pod="openshift-operators/perses-operator-5446b9c989-njrck" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.821113 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfq9\" (UniqueName: \"kubernetes.io/projected/cfb01668-ce93-42c0-8c77-1aaac40d5160-kube-api-access-zrfq9\") pod \"perses-operator-5446b9c989-njrck\" (UID: \"cfb01668-ce93-42c0-8c77-1aaac40d5160\") " pod="openshift-operators/perses-operator-5446b9c989-njrck" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.821183 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/cfb01668-ce93-42c0-8c77-1aaac40d5160-openshift-service-ca\") pod \"perses-operator-5446b9c989-njrck\" (UID: \"cfb01668-ce93-42c0-8c77-1aaac40d5160\") " pod="openshift-operators/perses-operator-5446b9c989-njrck" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.822038 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/cfb01668-ce93-42c0-8c77-1aaac40d5160-openshift-service-ca\") pod \"perses-operator-5446b9c989-njrck\" (UID: \"cfb01668-ce93-42c0-8c77-1aaac40d5160\") " pod="openshift-operators/perses-operator-5446b9c989-njrck" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.843747 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrfq9\" (UniqueName: \"kubernetes.io/projected/cfb01668-ce93-42c0-8c77-1aaac40d5160-kube-api-access-zrfq9\") pod \"perses-operator-5446b9c989-njrck\" (UID: \"cfb01668-ce93-42c0-8c77-1aaac40d5160\") " pod="openshift-operators/perses-operator-5446b9c989-njrck" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.866350 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc" Nov 28 13:31:03 crc kubenswrapper[4779]: I1128 13:31:03.919641 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-njrck" Nov 28 13:31:04 crc kubenswrapper[4779]: I1128 13:31:04.164602 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-l5jtg"] Nov 28 13:31:04 crc kubenswrapper[4779]: I1128 13:31:04.188728 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2"] Nov 28 13:31:04 crc kubenswrapper[4779]: W1128 13:31:04.220543 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9aef4803_506a_4ca3_9bdd_2ef8865a975c.slice/crio-e15b6ca7a67f9666864b1c68bb0fcb3043155c24f88e5ce7d803aa08108c7d26 WatchSource:0}: Error finding container e15b6ca7a67f9666864b1c68bb0fcb3043155c24f88e5ce7d803aa08108c7d26: Status 404 returned error can't find the container with id e15b6ca7a67f9666864b1c68bb0fcb3043155c24f88e5ce7d803aa08108c7d26 Nov 28 13:31:04 crc kubenswrapper[4779]: I1128 13:31:04.327176 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4"] Nov 28 13:31:04 crc kubenswrapper[4779]: W1128 13:31:04.346952 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fc94f4f_278c_4c4f_a547_2779183ca661.slice/crio-d9351e946a197550d4424755260f5c2b525b26801100f937f5b5ee9107ee2870 WatchSource:0}: Error finding container d9351e946a197550d4424755260f5c2b525b26801100f937f5b5ee9107ee2870: Status 404 returned error can't find the container with id d9351e946a197550d4424755260f5c2b525b26801100f937f5b5ee9107ee2870 Nov 28 13:31:04 crc kubenswrapper[4779]: I1128 13:31:04.571128 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-njrck"] Nov 28 13:31:04 crc kubenswrapper[4779]: W1128 13:31:04.579987 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfb01668_ce93_42c0_8c77_1aaac40d5160.slice/crio-54ff2909021436cbb96ddf0732d1c477c7133dc239ed648f0e45ea745a7589d4 WatchSource:0}: Error finding container 54ff2909021436cbb96ddf0732d1c477c7133dc239ed648f0e45ea745a7589d4: Status 404 returned error can't find the container with id 54ff2909021436cbb96ddf0732d1c477c7133dc239ed648f0e45ea745a7589d4 Nov 28 13:31:04 crc kubenswrapper[4779]: I1128 13:31:04.666120 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-z4wlc"] Nov 28 13:31:04 crc kubenswrapper[4779]: W1128 13:31:04.699623 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod179dd1bb_6c8d_443a_a408_40273ae8f6f6.slice/crio-da3f6867f394a9154de6d29641dc8b38ec137bafce335557b49fc8e555f73daa WatchSource:0}: Error finding container da3f6867f394a9154de6d29641dc8b38ec137bafce335557b49fc8e555f73daa: Status 404 returned error can't find the container with id da3f6867f394a9154de6d29641dc8b38ec137bafce335557b49fc8e555f73daa Nov 28 13:31:05 crc kubenswrapper[4779]: I1128 13:31:05.172430 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-l5jtg" event={"ID":"06f1d580-00d9-4699-8e8d-8087523ef59a","Type":"ContainerStarted","Data":"67c5d2abe0d99e6d8dab428486706e85fab9457e3c5baf20831249a40460e470"} Nov 28 13:31:05 crc 
kubenswrapper[4779]: I1128 13:31:05.177258 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4" event={"ID":"4fc94f4f-278c-4c4f-a547-2779183ca661","Type":"ContainerStarted","Data":"d9351e946a197550d4424755260f5c2b525b26801100f937f5b5ee9107ee2870"} Nov 28 13:31:05 crc kubenswrapper[4779]: I1128 13:31:05.179782 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2" event={"ID":"9aef4803-506a-4ca3-9bdd-2ef8865a975c","Type":"ContainerStarted","Data":"e15b6ca7a67f9666864b1c68bb0fcb3043155c24f88e5ce7d803aa08108c7d26"} Nov 28 13:31:05 crc kubenswrapper[4779]: I1128 13:31:05.181230 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-njrck" event={"ID":"cfb01668-ce93-42c0-8c77-1aaac40d5160","Type":"ContainerStarted","Data":"54ff2909021436cbb96ddf0732d1c477c7133dc239ed648f0e45ea745a7589d4"} Nov 28 13:31:05 crc kubenswrapper[4779]: I1128 13:31:05.185601 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc" event={"ID":"179dd1bb-6c8d-443a-a408-40273ae8f6f6","Type":"ContainerStarted","Data":"da3f6867f394a9154de6d29641dc8b38ec137bafce335557b49fc8e555f73daa"} Nov 28 13:31:19 crc kubenswrapper[4779]: I1128 13:31:19.443695 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4" event={"ID":"4fc94f4f-278c-4c4f-a547-2779183ca661","Type":"ContainerStarted","Data":"e8f4d060269ef4f96dedb4319b370d9e3a06b61837a1a0b0f6018c03624406ec"} Nov 28 13:31:19 crc kubenswrapper[4779]: I1128 13:31:19.446264 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2" event={"ID":"9aef4803-506a-4ca3-9bdd-2ef8865a975c","Type":"ContainerStarted","Data":"6d5c46b542ac735dd6523c8b1e94f1fa6d72fa3d1b3b0c82be2dde93fa2e1f91"} Nov 28 13:31:19 crc kubenswrapper[4779]: I1128 13:31:19.448524 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-njrck" event={"ID":"cfb01668-ce93-42c0-8c77-1aaac40d5160","Type":"ContainerStarted","Data":"11938ce7508de7a613aaf83f00302bfe2e84171b328aa350714e7875dbd41b30"} Nov 28 13:31:19 crc kubenswrapper[4779]: I1128 13:31:19.448716 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-njrck" Nov 28 13:31:19 crc kubenswrapper[4779]: I1128 13:31:19.473875 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4" podStartSLOduration=2.019271816 podStartE2EDuration="16.473852591s" podCreationTimestamp="2025-11-28 13:31:03 +0000 UTC" firstStartedPulling="2025-11-28 13:31:04.353048738 +0000 UTC m=+3324.918724092" lastFinishedPulling="2025-11-28 13:31:18.807629513 +0000 UTC m=+3339.373304867" observedRunningTime="2025-11-28 13:31:19.46410464 +0000 UTC m=+3340.029779994" watchObservedRunningTime="2025-11-28 13:31:19.473852591 +0000 UTC m=+3340.039527945" Nov 28 13:31:19 crc kubenswrapper[4779]: I1128 13:31:19.513912 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2" podStartSLOduration=1.841280222 podStartE2EDuration="16.513893581s" 
podCreationTimestamp="2025-11-28 13:31:03 +0000 UTC" firstStartedPulling="2025-11-28 13:31:04.222148051 +0000 UTC m=+3324.787823405" lastFinishedPulling="2025-11-28 13:31:18.89476141 +0000 UTC m=+3339.460436764" observedRunningTime="2025-11-28 13:31:19.501936401 +0000 UTC m=+3340.067611755" watchObservedRunningTime="2025-11-28 13:31:19.513893581 +0000 UTC m=+3340.079568935" Nov 28 13:31:19 crc kubenswrapper[4779]: I1128 13:31:19.554730 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-njrck" podStartSLOduration=2.298864835 podStartE2EDuration="16.554712041s" podCreationTimestamp="2025-11-28 13:31:03 +0000 UTC" firstStartedPulling="2025-11-28 13:31:04.581726337 +0000 UTC m=+3325.147401691" lastFinishedPulling="2025-11-28 13:31:18.837573543 +0000 UTC m=+3339.403248897" observedRunningTime="2025-11-28 13:31:19.546001908 +0000 UTC m=+3340.111677262" watchObservedRunningTime="2025-11-28 13:31:19.554712041 +0000 UTC m=+3340.120387395" Nov 28 13:31:20 crc kubenswrapper[4779]: I1128 13:31:20.460370 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-l5jtg" event={"ID":"06f1d580-00d9-4699-8e8d-8087523ef59a","Type":"ContainerStarted","Data":"136c21e904d65a3bf7fe6278322d93a2a00723f0d3fac8b53a8303160a72eb1e"} Nov 28 13:31:21 crc kubenswrapper[4779]: I1128 13:31:21.462161 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-l5jtg" podStartSLOduration=4.740768427 podStartE2EDuration="19.462143959s" podCreationTimestamp="2025-11-28 13:31:02 +0000 UTC" firstStartedPulling="2025-11-28 13:31:04.173671936 +0000 UTC m=+3324.739347290" lastFinishedPulling="2025-11-28 13:31:18.895047468 +0000 UTC m=+3339.460722822" observedRunningTime="2025-11-28 13:31:20.482295862 +0000 UTC m=+3341.047971226" watchObservedRunningTime="2025-11-28 13:31:21.462143959 +0000 UTC m=+3342.027819303" Nov 28 13:31:21 crc kubenswrapper[4779]: I1128 13:31:21.468832 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 13:31:21 crc kubenswrapper[4779]: I1128 13:31:21.469078 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-api" containerID="cri-o://ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de" gracePeriod=30 Nov 28 13:31:21 crc kubenswrapper[4779]: I1128 13:31:21.469528 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-listener" containerID="cri-o://a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584" gracePeriod=30 Nov 28 13:31:21 crc kubenswrapper[4779]: I1128 13:31:21.469575 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-notifier" containerID="cri-o://f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021" gracePeriod=30 Nov 28 13:31:21 crc kubenswrapper[4779]: I1128 13:31:21.469611 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-evaluator" containerID="cri-o://0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257" gracePeriod=30 Nov 28 13:31:22 crc kubenswrapper[4779]: I1128 
13:31:22.493284 4779 generic.go:334] "Generic (PLEG): container finished" podID="1a8e5660-e380-4665-b764-3fea920548f1" containerID="0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257" exitCode=0 Nov 28 13:31:22 crc kubenswrapper[4779]: I1128 13:31:22.493568 4779 generic.go:334] "Generic (PLEG): container finished" podID="1a8e5660-e380-4665-b764-3fea920548f1" containerID="ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de" exitCode=0 Nov 28 13:31:22 crc kubenswrapper[4779]: I1128 13:31:22.493595 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1a8e5660-e380-4665-b764-3fea920548f1","Type":"ContainerDied","Data":"0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257"} Nov 28 13:31:22 crc kubenswrapper[4779]: I1128 13:31:22.493621 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1a8e5660-e380-4665-b764-3fea920548f1","Type":"ContainerDied","Data":"ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de"} Nov 28 13:31:23 crc kubenswrapper[4779]: I1128 13:31:23.510815 4779 generic.go:334] "Generic (PLEG): container finished" podID="1a8e5660-e380-4665-b764-3fea920548f1" containerID="a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584" exitCode=0 Nov 28 13:31:23 crc kubenswrapper[4779]: I1128 13:31:23.510857 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1a8e5660-e380-4665-b764-3fea920548f1","Type":"ContainerDied","Data":"a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584"} Nov 28 13:31:24 crc kubenswrapper[4779]: I1128 13:31:24.520397 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc" event={"ID":"179dd1bb-6c8d-443a-a408-40273ae8f6f6","Type":"ContainerStarted","Data":"948763dd42eb44a00de550315353ad39199fd9ce46ce97dc6fbb34fb8900d945"} Nov 28 13:31:24 crc kubenswrapper[4779]: I1128 13:31:24.521137 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc" Nov 28 13:31:24 crc kubenswrapper[4779]: I1128 13:31:24.545900 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc" podStartSLOduration=2.691353552 podStartE2EDuration="21.545878831s" podCreationTimestamp="2025-11-28 13:31:03 +0000 UTC" firstStartedPulling="2025-11-28 13:31:04.707254351 +0000 UTC m=+3325.272929705" lastFinishedPulling="2025-11-28 13:31:23.56177963 +0000 UTC m=+3344.127454984" observedRunningTime="2025-11-28 13:31:24.538792171 +0000 UTC m=+3345.104467555" watchObservedRunningTime="2025-11-28 13:31:24.545878831 +0000 UTC m=+3345.111554185" Nov 28 13:31:24 crc kubenswrapper[4779]: I1128 13:31:24.578259 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-z4wlc" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.758325 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.760483 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.762866 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.763044 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.763454 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-w9bln" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.763511 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.768715 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.829964 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.852385 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7222438c-fe9c-429a-899e-269d84def6d7-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.852432 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s8j4\" (UniqueName: \"kubernetes.io/projected/7222438c-fe9c-429a-899e-269d84def6d7-kube-api-access-5s8j4\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.852456 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7222438c-fe9c-429a-899e-269d84def6d7-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.852479 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7222438c-fe9c-429a-899e-269d84def6d7-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.852576 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7222438c-fe9c-429a-899e-269d84def6d7-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.852623 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/7222438c-fe9c-429a-899e-269d84def6d7-alertmanager-metric-storage-db\") pod 
\"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.852702 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7222438c-fe9c-429a-899e-269d84def6d7-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.954621 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7222438c-fe9c-429a-899e-269d84def6d7-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.954714 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7222438c-fe9c-429a-899e-269d84def6d7-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.954746 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s8j4\" (UniqueName: \"kubernetes.io/projected/7222438c-fe9c-429a-899e-269d84def6d7-kube-api-access-5s8j4\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.954772 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7222438c-fe9c-429a-899e-269d84def6d7-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.954800 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7222438c-fe9c-429a-899e-269d84def6d7-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.954843 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7222438c-fe9c-429a-899e-269d84def6d7-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.954908 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/7222438c-fe9c-429a-899e-269d84def6d7-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.955781 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/7222438c-fe9c-429a-899e-269d84def6d7-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.960286 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7222438c-fe9c-429a-899e-269d84def6d7-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.960459 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7222438c-fe9c-429a-899e-269d84def6d7-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.960830 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7222438c-fe9c-429a-899e-269d84def6d7-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.964313 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7222438c-fe9c-429a-899e-269d84def6d7-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.965280 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7222438c-fe9c-429a-899e-269d84def6d7-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:25 crc kubenswrapper[4779]: I1128 13:31:25.970138 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s8j4\" (UniqueName: \"kubernetes.io/projected/7222438c-fe9c-429a-899e-269d84def6d7-kube-api-access-5s8j4\") pod \"alertmanager-metric-storage-0\" (UID: \"7222438c-fe9c-429a-899e-269d84def6d7\") " pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.083082 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.352520 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.355552 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.358947 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.359142 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.359230 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-k7vw5" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.359258 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.359463 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.359593 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.368745 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.464956 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-config\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.465016 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.465213 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.465240 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.465268 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.465351 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phk6p\" 
(UniqueName: \"kubernetes.io/projected/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-kube-api-access-phk6p\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.465397 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.465430 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.567062 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.567163 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.567198 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-config\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.567234 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.567420 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.567457 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.567493 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" 
(UniqueName: \"kubernetes.io/configmap/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.567555 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phk6p\" (UniqueName: \"kubernetes.io/projected/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-kube-api-access-phk6p\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.568012 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.568305 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.573480 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.574270 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.575268 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.578121 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.578786 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-config\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.589017 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phk6p\" (UniqueName: 
\"kubernetes.io/projected/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-kube-api-access-phk6p\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.609824 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.674154 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 28 13:31:26 crc kubenswrapper[4779]: I1128 13:31:26.678353 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 13:31:27 crc kubenswrapper[4779]: I1128 13:31:27.154326 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:31:27 crc kubenswrapper[4779]: W1128 13:31:27.163270 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod319ee731_3ce1_42ae_bd3e_0c8e38840b1d.slice/crio-b60301e21cf544c72f00ae6a4bcec9237ccf989ad7864af7e4219af7ef466b51 WatchSource:0}: Error finding container b60301e21cf544c72f00ae6a4bcec9237ccf989ad7864af7e4219af7ef466b51: Status 404 returned error can't find the container with id b60301e21cf544c72f00ae6a4bcec9237ccf989ad7864af7e4219af7ef466b51 Nov 28 13:31:27 crc kubenswrapper[4779]: I1128 13:31:27.553338 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"319ee731-3ce1-42ae-bd3e-0c8e38840b1d","Type":"ContainerStarted","Data":"b60301e21cf544c72f00ae6a4bcec9237ccf989ad7864af7e4219af7ef466b51"} Nov 28 13:31:27 crc kubenswrapper[4779]: I1128 13:31:27.554580 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7222438c-fe9c-429a-899e-269d84def6d7","Type":"ContainerStarted","Data":"004821f56625b1a3ea1c9a500a56dcf4fad019b3e6436ea9ea6b4d735ccbbfe2"} Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.156968 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.189221 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6s66\" (UniqueName: \"kubernetes.io/projected/1a8e5660-e380-4665-b764-3fea920548f1-kube-api-access-g6s66\") pod \"1a8e5660-e380-4665-b764-3fea920548f1\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.189274 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-config-data\") pod \"1a8e5660-e380-4665-b764-3fea920548f1\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.189294 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-combined-ca-bundle\") pod \"1a8e5660-e380-4665-b764-3fea920548f1\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.189369 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-scripts\") pod \"1a8e5660-e380-4665-b764-3fea920548f1\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.189405 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-public-tls-certs\") pod \"1a8e5660-e380-4665-b764-3fea920548f1\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.189520 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-internal-tls-certs\") pod \"1a8e5660-e380-4665-b764-3fea920548f1\" (UID: \"1a8e5660-e380-4665-b764-3fea920548f1\") " Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.194434 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a8e5660-e380-4665-b764-3fea920548f1-kube-api-access-g6s66" (OuterVolumeSpecName: "kube-api-access-g6s66") pod "1a8e5660-e380-4665-b764-3fea920548f1" (UID: "1a8e5660-e380-4665-b764-3fea920548f1"). InnerVolumeSpecName "kube-api-access-g6s66". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.194687 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-scripts" (OuterVolumeSpecName: "scripts") pod "1a8e5660-e380-4665-b764-3fea920548f1" (UID: "1a8e5660-e380-4665-b764-3fea920548f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.271680 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1a8e5660-e380-4665-b764-3fea920548f1" (UID: "1a8e5660-e380-4665-b764-3fea920548f1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.291646 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1a8e5660-e380-4665-b764-3fea920548f1" (UID: "1a8e5660-e380-4665-b764-3fea920548f1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.292652 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6s66\" (UniqueName: \"kubernetes.io/projected/1a8e5660-e380-4665-b764-3fea920548f1-kube-api-access-g6s66\") on node \"crc\" DevicePath \"\"" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.292673 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.292682 4779 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.292690 4779 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.330058 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a8e5660-e380-4665-b764-3fea920548f1" (UID: "1a8e5660-e380-4665-b764-3fea920548f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.340015 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-config-data" (OuterVolumeSpecName: "config-data") pod "1a8e5660-e380-4665-b764-3fea920548f1" (UID: "1a8e5660-e380-4665-b764-3fea920548f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.394682 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.395019 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a8e5660-e380-4665-b764-3fea920548f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.605717 4779 generic.go:334] "Generic (PLEG): container finished" podID="1a8e5660-e380-4665-b764-3fea920548f1" containerID="f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021" exitCode=0 Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.605760 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1a8e5660-e380-4665-b764-3fea920548f1","Type":"ContainerDied","Data":"f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021"} Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.605785 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1a8e5660-e380-4665-b764-3fea920548f1","Type":"ContainerDied","Data":"25467aad9530ff2e13747dbdfbaf2f429ad22a1b8bca581517d7099b52cdb0a6"} Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.605800 4779 scope.go:117] "RemoveContainer" containerID="a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.605926 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.640206 4779 scope.go:117] "RemoveContainer" containerID="f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.645786 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.659610 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.660658 4779 scope.go:117] "RemoveContainer" containerID="0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.671613 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 28 13:31:32 crc kubenswrapper[4779]: E1128 13:31:32.672016 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-evaluator" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.672034 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-evaluator" Nov 28 13:31:32 crc kubenswrapper[4779]: E1128 13:31:32.672050 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-notifier" Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.672058 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-notifier" Nov 28 13:31:32 crc kubenswrapper[4779]: E1128 13:31:32.672087 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-api" Nov 28 13:31:32 crc 
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.672096 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-api"
Nov 28 13:31:32 crc kubenswrapper[4779]: E1128 13:31:32.672134 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-listener"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.672143 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-listener"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.672319 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-notifier"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.672332 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-evaluator"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.672347 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-api"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.672368 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a8e5660-e380-4665-b764-3fea920548f1" containerName="aodh-listener"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.674571 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.679508 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.679682 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.679808 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.679834 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wdvkm"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.681058 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.683403 4779 scope.go:117] "RemoveContainer" containerID="ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.686939 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.726940 4779 scope.go:117] "RemoveContainer" containerID="a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584"
Nov 28 13:31:32 crc kubenswrapper[4779]: E1128 13:31:32.728415 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584\": container with ID starting with a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584 not found: ID does not exist" containerID="a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.728448 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584"} err="failed to get container status \"a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584\": rpc error: code = NotFound desc = could not find container \"a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584\": container with ID starting with a437877253c6baaf6d32dd06c2b9c27eb513e85c1344f650e836e4320b358584 not found: ID does not exist"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.728465 4779 scope.go:117] "RemoveContainer" containerID="f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021"
Nov 28 13:31:32 crc kubenswrapper[4779]: E1128 13:31:32.729051 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021\": container with ID starting with f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021 not found: ID does not exist" containerID="f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.729110 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021"} err="failed to get container status \"f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021\": rpc error: code = NotFound desc = could not find container \"f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021\": container with ID starting with f0472042d8ef1b14166cecbe53b8904b508cf830837e56ab8d0da6c7e506f021 not found: ID does not exist"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.729147 4779 scope.go:117] "RemoveContainer" containerID="0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257"
Nov 28 13:31:32 crc kubenswrapper[4779]: E1128 13:31:32.729381 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257\": container with ID starting with 0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257 not found: ID does not exist" containerID="0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.729397 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257"} err="failed to get container status \"0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257\": rpc error: code = NotFound desc = could not find container \"0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257\": container with ID starting with 0c6039808062e2eac3e32b342138d8dd98fa5fc33feb8f36b2c6a08a0652c257 not found: ID does not exist"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.729408 4779 scope.go:117] "RemoveContainer" containerID="ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de"
Nov 28 13:31:32 crc kubenswrapper[4779]: E1128 13:31:32.729600 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de\": container with ID starting with ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de not found: ID does not exist" containerID="ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.729616 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de"} err="failed to get container status \"ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de\": rpc error: code = NotFound desc = could not find container \"ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de\": container with ID starting with ffb32173505904f113786c182ee2f0a52d8909f8fbd747131e3024dff45119de not found: ID does not exist"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.801406 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-public-tls-certs\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.801493 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7xgw\" (UniqueName: \"kubernetes.io/projected/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-kube-api-access-t7xgw\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.801591 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-scripts\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.802038 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.802295 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-config-data\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.802525 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-internal-tls-certs\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.904328 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-scripts\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.904436 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.904500 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-config-data\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.904562 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-internal-tls-certs\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.904630 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-public-tls-certs\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.904658 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7xgw\" (UniqueName: \"kubernetes.io/projected/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-kube-api-access-t7xgw\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.910934 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-scripts\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.910961 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-config-data\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.912853 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.915993 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-internal-tls-certs\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.917489 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-public-tls-certs\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Nov 28 13:31:32 crc kubenswrapper[4779]: I1128 13:31:32.922879 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7xgw\" (UniqueName: \"kubernetes.io/projected/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-kube-api-access-t7xgw\") pod \"aodh-0\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") " pod="openstack/aodh-0"
Need to start a new one" pod="openstack/aodh-0" Nov 28 13:31:33 crc kubenswrapper[4779]: I1128 13:31:33.597844 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 13:31:33 crc kubenswrapper[4779]: I1128 13:31:33.745897 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a8e5660-e380-4665-b764-3fea920548f1" path="/var/lib/kubelet/pods/1a8e5660-e380-4665-b764-3fea920548f1/volumes" Nov 28 13:31:33 crc kubenswrapper[4779]: I1128 13:31:33.923912 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-njrck" Nov 28 13:31:34 crc kubenswrapper[4779]: I1128 13:31:34.624555 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"319ee731-3ce1-42ae-bd3e-0c8e38840b1d","Type":"ContainerStarted","Data":"97ef1a4047ed4cc3045abea3848e7952eecb45b096a66f6d243bff6d241154d0"} Nov 28 13:31:34 crc kubenswrapper[4779]: I1128 13:31:34.626219 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7222438c-fe9c-429a-899e-269d84def6d7","Type":"ContainerStarted","Data":"ee93b75da1d188d8c915729cc6cba004a69051a819dd5220faf83978a540ff04"} Nov 28 13:31:34 crc kubenswrapper[4779]: I1128 13:31:34.627582 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1","Type":"ContainerStarted","Data":"145414fd4307da163e256c8758590b277cc951f3e3606379832537080a320591"} Nov 28 13:31:35 crc kubenswrapper[4779]: I1128 13:31:35.640205 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1","Type":"ContainerStarted","Data":"ecb7f508c1a9a6987d85aea3f2ef806b9f15a329c619d932647ba1ad408c56ca"} Nov 28 13:31:36 crc kubenswrapper[4779]: I1128 13:31:36.649908 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1","Type":"ContainerStarted","Data":"6cbfe3a1a5798f713660a028a50c4f16786efba6254130658fbf77dc6972d28c"} Nov 28 13:31:37 crc kubenswrapper[4779]: I1128 13:31:37.662929 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1","Type":"ContainerStarted","Data":"20a40b39e6c8de6c23508a8537df04a1c47a0d64135d620466fbecc5946a5ff5"} Nov 28 13:31:38 crc kubenswrapper[4779]: I1128 13:31:38.676365 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1","Type":"ContainerStarted","Data":"39ff709985bff9f1a0a1d4311392d7a600e04de1157b51fc9ce4a9d31ee31ba3"} Nov 28 13:31:38 crc kubenswrapper[4779]: I1128 13:31:38.706603 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.418591729 podStartE2EDuration="6.706585363s" podCreationTimestamp="2025-11-28 13:31:32 +0000 UTC" firstStartedPulling="2025-11-28 13:31:33.698229815 +0000 UTC m=+3354.263905189" lastFinishedPulling="2025-11-28 13:31:37.986223469 +0000 UTC m=+3358.551898823" observedRunningTime="2025-11-28 13:31:38.705833813 +0000 UTC m=+3359.271509167" watchObservedRunningTime="2025-11-28 13:31:38.706585363 +0000 UTC m=+3359.272260717" Nov 28 13:31:40 crc kubenswrapper[4779]: I1128 13:31:40.696506 4779 generic.go:334] "Generic (PLEG): container finished" podID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" 
containerID="97ef1a4047ed4cc3045abea3848e7952eecb45b096a66f6d243bff6d241154d0" exitCode=0 Nov 28 13:31:40 crc kubenswrapper[4779]: I1128 13:31:40.696585 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"319ee731-3ce1-42ae-bd3e-0c8e38840b1d","Type":"ContainerDied","Data":"97ef1a4047ed4cc3045abea3848e7952eecb45b096a66f6d243bff6d241154d0"} Nov 28 13:31:41 crc kubenswrapper[4779]: I1128 13:31:41.707261 4779 generic.go:334] "Generic (PLEG): container finished" podID="7222438c-fe9c-429a-899e-269d84def6d7" containerID="ee93b75da1d188d8c915729cc6cba004a69051a819dd5220faf83978a540ff04" exitCode=0 Nov 28 13:31:41 crc kubenswrapper[4779]: I1128 13:31:41.707313 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7222438c-fe9c-429a-899e-269d84def6d7","Type":"ContainerDied","Data":"ee93b75da1d188d8c915729cc6cba004a69051a819dd5220faf83978a540ff04"} Nov 28 13:31:50 crc kubenswrapper[4779]: I1128 13:31:50.817281 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"319ee731-3ce1-42ae-bd3e-0c8e38840b1d","Type":"ContainerStarted","Data":"12f281c00d07dd0cf97467152a470d764a1210a21ff5af64bb54dc441faa2fe5"} Nov 28 13:31:50 crc kubenswrapper[4779]: I1128 13:31:50.821237 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7222438c-fe9c-429a-899e-269d84def6d7","Type":"ContainerStarted","Data":"c3ea9851fa5af09923631892446fbc4fa5526b43ba078ce297e432bf352bdb63"} Nov 28 13:31:54 crc kubenswrapper[4779]: I1128 13:31:54.864268 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7222438c-fe9c-429a-899e-269d84def6d7","Type":"ContainerStarted","Data":"1668853c7dce89a65c01730f5d2a153e77035ff4ba8287af9ee6af84a2fe19d0"} Nov 28 13:31:54 crc kubenswrapper[4779]: I1128 13:31:54.864784 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:54 crc kubenswrapper[4779]: I1128 13:31:54.867883 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Nov 28 13:31:54 crc kubenswrapper[4779]: I1128 13:31:54.869461 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"319ee731-3ce1-42ae-bd3e-0c8e38840b1d","Type":"ContainerStarted","Data":"ade1213fe702ecc2625ffb0eaec599ccb825f142ff1aa5a3cc461ecae8d1709e"} Nov 28 13:31:54 crc kubenswrapper[4779]: I1128 13:31:54.904813 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.503071671 podStartE2EDuration="29.904792739s" podCreationTimestamp="2025-11-28 13:31:25 +0000 UTC" firstStartedPulling="2025-11-28 13:31:26.673307164 +0000 UTC m=+3347.238982508" lastFinishedPulling="2025-11-28 13:31:50.075028202 +0000 UTC m=+3370.640703576" observedRunningTime="2025-11-28 13:31:54.884584159 +0000 UTC m=+3375.450259513" watchObservedRunningTime="2025-11-28 13:31:54.904792739 +0000 UTC m=+3375.470468113" Nov 28 13:31:58 crc kubenswrapper[4779]: I1128 13:31:58.912077 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"319ee731-3ce1-42ae-bd3e-0c8e38840b1d","Type":"ContainerStarted","Data":"e8652df2b3c0a0f079c77e4001f691055a7555c13a421136412d2f3a82661ce9"} Nov 28 13:31:58 crc 
kubenswrapper[4779]: I1128 13:31:58.949585 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.303468965 podStartE2EDuration="33.949562235s" podCreationTimestamp="2025-11-28 13:31:25 +0000 UTC" firstStartedPulling="2025-11-28 13:31:27.165599056 +0000 UTC m=+3347.731274410" lastFinishedPulling="2025-11-28 13:31:57.811692326 +0000 UTC m=+3378.377367680" observedRunningTime="2025-11-28 13:31:58.944681114 +0000 UTC m=+3379.510356468" watchObservedRunningTime="2025-11-28 13:31:58.949562235 +0000 UTC m=+3379.515237609" Nov 28 13:32:01 crc kubenswrapper[4779]: I1128 13:32:01.678988 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:11 crc kubenswrapper[4779]: I1128 13:32:11.679070 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:11 crc kubenswrapper[4779]: I1128 13:32:11.683527 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:12 crc kubenswrapper[4779]: I1128 13:32:12.039970 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.376296 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.377532 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="7f349931-5145-4f53-a9a4-6e2c915d0ab9" containerName="openstackclient" containerID="cri-o://62ac887c34e98db417a37aca60c8ff5dd801b1a13cf3e720d629ae03313a1016" gracePeriod=2 Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.387737 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.422649 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 13:32:13 crc kubenswrapper[4779]: E1128 13:32:13.423232 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f349931-5145-4f53-a9a4-6e2c915d0ab9" containerName="openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.423250 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f349931-5145-4f53-a9a4-6e2c915d0ab9" containerName="openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.423513 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f349931-5145-4f53-a9a4-6e2c915d0ab9" containerName="openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.424395 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.431315 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7f349931-5145-4f53-a9a4-6e2c915d0ab9" podUID="6880c28c-d000-45ad-9b79-71ab16c628ad" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.445153 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.445325 4779 status_manager.go:875] "Failed to update status for pod" pod="openstack/openstackclient" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6880c28c-d000-45ad-9b79-71ab16c628ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T13:32:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T13:32:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T13:32:13Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T13:32:13Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openstackclient\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/clouds.yaml\\\",\\\"name\\\":\\\"openstack-config\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/secure.yaml\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/cloudrc\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\\\",\\\"name\\\":\\\"combined-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfnxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T13:32:13Z\\\"}}\" for pod \"openstack\"/\"openstackclient\": pods \"openstackclient\" not found" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.455078 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 28 13:32:13 crc kubenswrapper[4779]: E1128 13:32:13.455958 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-mfnxh openstack-config openstack-config-secret], unattached volumes=[], failed to 
process volumes=[combined-ca-bundle kube-api-access-mfnxh openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="6880c28c-d000-45ad-9b79-71ab16c628ad" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.468255 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.495947 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.497265 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.502073 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6880c28c-d000-45ad-9b79-71ab16c628ad" podUID="488dc09e-4b09-40a3-8bfa-fd3116307f09" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.506086 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.579427 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488dc09e-4b09-40a3-8bfa-fd3116307f09-combined-ca-bundle\") pod \"openstackclient\" (UID: \"488dc09e-4b09-40a3-8bfa-fd3116307f09\") " pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.579584 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/488dc09e-4b09-40a3-8bfa-fd3116307f09-openstack-config-secret\") pod \"openstackclient\" (UID: \"488dc09e-4b09-40a3-8bfa-fd3116307f09\") " pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.579672 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwdnf\" (UniqueName: \"kubernetes.io/projected/488dc09e-4b09-40a3-8bfa-fd3116307f09-kube-api-access-pwdnf\") pod \"openstackclient\" (UID: \"488dc09e-4b09-40a3-8bfa-fd3116307f09\") " pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.579712 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/488dc09e-4b09-40a3-8bfa-fd3116307f09-openstack-config\") pod \"openstackclient\" (UID: \"488dc09e-4b09-40a3-8bfa-fd3116307f09\") " pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.685756 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwdnf\" (UniqueName: \"kubernetes.io/projected/488dc09e-4b09-40a3-8bfa-fd3116307f09-kube-api-access-pwdnf\") pod \"openstackclient\" (UID: \"488dc09e-4b09-40a3-8bfa-fd3116307f09\") " pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.685830 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/488dc09e-4b09-40a3-8bfa-fd3116307f09-openstack-config\") pod \"openstackclient\" (UID: \"488dc09e-4b09-40a3-8bfa-fd3116307f09\") " pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.685914 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488dc09e-4b09-40a3-8bfa-fd3116307f09-combined-ca-bundle\") pod \"openstackclient\" (UID: \"488dc09e-4b09-40a3-8bfa-fd3116307f09\") " pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.686053 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/488dc09e-4b09-40a3-8bfa-fd3116307f09-openstack-config-secret\") pod \"openstackclient\" (UID: \"488dc09e-4b09-40a3-8bfa-fd3116307f09\") " pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.686901 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/488dc09e-4b09-40a3-8bfa-fd3116307f09-openstack-config\") pod \"openstackclient\" (UID: \"488dc09e-4b09-40a3-8bfa-fd3116307f09\") " pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.699774 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/488dc09e-4b09-40a3-8bfa-fd3116307f09-openstack-config-secret\") pod \"openstackclient\" (UID: \"488dc09e-4b09-40a3-8bfa-fd3116307f09\") " pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.712713 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488dc09e-4b09-40a3-8bfa-fd3116307f09-combined-ca-bundle\") pod \"openstackclient\" (UID: \"488dc09e-4b09-40a3-8bfa-fd3116307f09\") " pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.720762 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwdnf\" (UniqueName: \"kubernetes.io/projected/488dc09e-4b09-40a3-8bfa-fd3116307f09-kube-api-access-pwdnf\") pod \"openstackclient\" (UID: \"488dc09e-4b09-40a3-8bfa-fd3116307f09\") " pod="openstack/openstackclient" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.746724 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6880c28c-d000-45ad-9b79-71ab16c628ad" path="/var/lib/kubelet/pods/6880c28c-d000-45ad-9b79-71ab16c628ad/volumes" Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.784800 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.785138 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-api" containerID="cri-o://ecb7f508c1a9a6987d85aea3f2ef806b9f15a329c619d932647ba1ad408c56ca" gracePeriod=30 Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.785258 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-evaluator" containerID="cri-o://6cbfe3a1a5798f713660a028a50c4f16786efba6254130658fbf77dc6972d28c" gracePeriod=30 Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.785257 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-listener" containerID="cri-o://39ff709985bff9f1a0a1d4311392d7a600e04de1157b51fc9ce4a9d31ee31ba3" gracePeriod=30 Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.785279 4779 
Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.785279 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-notifier" containerID="cri-o://20a40b39e6c8de6c23508a8537df04a1c47a0d64135d620466fbecc5946a5ff5" gracePeriod=30
Nov 28 13:32:13 crc kubenswrapper[4779]: I1128 13:32:13.834717 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 28 13:32:14 crc kubenswrapper[4779]: I1128 13:32:14.054084 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 28 13:32:14 crc kubenswrapper[4779]: I1128 13:32:14.057655 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6880c28c-d000-45ad-9b79-71ab16c628ad" podUID="488dc09e-4b09-40a3-8bfa-fd3116307f09"
Nov 28 13:32:14 crc kubenswrapper[4779]: I1128 13:32:14.065394 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 28 13:32:14 crc kubenswrapper[4779]: I1128 13:32:14.068682 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6880c28c-d000-45ad-9b79-71ab16c628ad" podUID="488dc09e-4b09-40a3-8bfa-fd3116307f09"
Nov 28 13:32:15 crc kubenswrapper[4779]: I1128 13:32:15.089929 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 28 13:32:15 crc kubenswrapper[4779]: I1128 13:32:15.095448 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6880c28c-d000-45ad-9b79-71ab16c628ad" podUID="488dc09e-4b09-40a3-8bfa-fd3116307f09"
Nov 28 13:32:15 crc kubenswrapper[4779]: I1128 13:32:15.115226 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6880c28c-d000-45ad-9b79-71ab16c628ad" podUID="488dc09e-4b09-40a3-8bfa-fd3116307f09"
Nov 28 13:32:15 crc kubenswrapper[4779]: I1128 13:32:15.571188 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 28 13:32:15 crc kubenswrapper[4779]: I1128 13:32:15.588227 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 28 13:32:15 crc kubenswrapper[4779]: I1128 13:32:15.588490 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="prometheus" containerID="cri-o://12f281c00d07dd0cf97467152a470d764a1210a21ff5af64bb54dc441faa2fe5" gracePeriod=600
Nov 28 13:32:15 crc kubenswrapper[4779]: I1128 13:32:15.588603 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="config-reloader" containerID="cri-o://ade1213fe702ecc2625ffb0eaec599ccb825f142ff1aa5a3cc461ecae8d1709e" gracePeriod=600
Nov 28 13:32:15 crc kubenswrapper[4779]: I1128 13:32:15.588588 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="thanos-sidecar" containerID="cri-o://e8652df2b3c0a0f079c77e4001f691055a7555c13a421136412d2f3a82661ce9" gracePeriod=600
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.101855 4779 generic.go:334] "Generic (PLEG): container finished" podID="7f349931-5145-4f53-a9a4-6e2c915d0ab9" containerID="62ac887c34e98db417a37aca60c8ff5dd801b1a13cf3e720d629ae03313a1016" exitCode=137
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.104936 4779 generic.go:334] "Generic (PLEG): container finished" podID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerID="39ff709985bff9f1a0a1d4311392d7a600e04de1157b51fc9ce4a9d31ee31ba3" exitCode=0
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.104962 4779 generic.go:334] "Generic (PLEG): container finished" podID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerID="20a40b39e6c8de6c23508a8537df04a1c47a0d64135d620466fbecc5946a5ff5" exitCode=0
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.104974 4779 generic.go:334] "Generic (PLEG): container finished" podID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerID="6cbfe3a1a5798f713660a028a50c4f16786efba6254130658fbf77dc6972d28c" exitCode=0
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.104984 4779 generic.go:334] "Generic (PLEG): container finished" podID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerID="ecb7f508c1a9a6987d85aea3f2ef806b9f15a329c619d932647ba1ad408c56ca" exitCode=0
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.105023 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1","Type":"ContainerDied","Data":"39ff709985bff9f1a0a1d4311392d7a600e04de1157b51fc9ce4a9d31ee31ba3"}
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.105046 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1","Type":"ContainerDied","Data":"20a40b39e6c8de6c23508a8537df04a1c47a0d64135d620466fbecc5946a5ff5"}
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.105057 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1","Type":"ContainerDied","Data":"6cbfe3a1a5798f713660a028a50c4f16786efba6254130658fbf77dc6972d28c"}
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.105069 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1","Type":"ContainerDied","Data":"ecb7f508c1a9a6987d85aea3f2ef806b9f15a329c619d932647ba1ad408c56ca"}
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.107253 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"488dc09e-4b09-40a3-8bfa-fd3116307f09","Type":"ContainerStarted","Data":"0fb4dab5e4b4203e38b26fb8dd0b52da39c3f5c6ba86da51635b01ca3bc7263e"}
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.107421 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"488dc09e-4b09-40a3-8bfa-fd3116307f09","Type":"ContainerStarted","Data":"5a904e2c7967754eed0b9e4e92e1913a77c694af20abb0da6193c17c17307366"}
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.114545 4779 generic.go:334] "Generic (PLEG): container finished" podID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerID="e8652df2b3c0a0f079c77e4001f691055a7555c13a421136412d2f3a82661ce9" exitCode=0
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.114583 4779 generic.go:334] "Generic (PLEG): container finished" podID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerID="ade1213fe702ecc2625ffb0eaec599ccb825f142ff1aa5a3cc461ecae8d1709e" exitCode=0
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.114596 4779 generic.go:334] "Generic (PLEG): container finished" podID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerID="12f281c00d07dd0cf97467152a470d764a1210a21ff5af64bb54dc441faa2fe5" exitCode=0
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.114621 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"319ee731-3ce1-42ae-bd3e-0c8e38840b1d","Type":"ContainerDied","Data":"e8652df2b3c0a0f079c77e4001f691055a7555c13a421136412d2f3a82661ce9"}
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.114649 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"319ee731-3ce1-42ae-bd3e-0c8e38840b1d","Type":"ContainerDied","Data":"ade1213fe702ecc2625ffb0eaec599ccb825f142ff1aa5a3cc461ecae8d1709e"}
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.114665 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"319ee731-3ce1-42ae-bd3e-0c8e38840b1d","Type":"ContainerDied","Data":"12f281c00d07dd0cf97467152a470d764a1210a21ff5af64bb54dc441faa2fe5"}
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.129111 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.129078375 podStartE2EDuration="3.129078375s" podCreationTimestamp="2025-11-28 13:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 13:32:16.123414874 +0000 UTC m=+3396.689090238" watchObservedRunningTime="2025-11-28 13:32:16.129078375 +0000 UTC m=+3396.694753729"
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.220680 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.285506 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.285579 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.341006 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7f349931-5145-4f53-a9a4-6e2c915d0ab9-openstack-config\") pod \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") "
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.341072 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f349931-5145-4f53-a9a4-6e2c915d0ab9-openstack-config-secret\") pod \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") "
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.341142 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht62v\" (UniqueName: \"kubernetes.io/projected/7f349931-5145-4f53-a9a4-6e2c915d0ab9-kube-api-access-ht62v\") pod \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") "
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.341177 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f349931-5145-4f53-a9a4-6e2c915d0ab9-combined-ca-bundle\") pod \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\" (UID: \"7f349931-5145-4f53-a9a4-6e2c915d0ab9\") "
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.357361 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f349931-5145-4f53-a9a4-6e2c915d0ab9-kube-api-access-ht62v" (OuterVolumeSpecName: "kube-api-access-ht62v") pod "7f349931-5145-4f53-a9a4-6e2c915d0ab9" (UID: "7f349931-5145-4f53-a9a4-6e2c915d0ab9"). InnerVolumeSpecName "kube-api-access-ht62v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.378689 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f349931-5145-4f53-a9a4-6e2c915d0ab9-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7f349931-5145-4f53-a9a4-6e2c915d0ab9" (UID: "7f349931-5145-4f53-a9a4-6e2c915d0ab9"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.398362 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f349931-5145-4f53-a9a4-6e2c915d0ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f349931-5145-4f53-a9a4-6e2c915d0ab9" (UID: "7f349931-5145-4f53-a9a4-6e2c915d0ab9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.446543 4779 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7f349931-5145-4f53-a9a4-6e2c915d0ab9-openstack-config\") on node \"crc\" DevicePath \"\""
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.446583 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht62v\" (UniqueName: \"kubernetes.io/projected/7f349931-5145-4f53-a9a4-6e2c915d0ab9-kube-api-access-ht62v\") on node \"crc\" DevicePath \"\""
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.446594 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f349931-5145-4f53-a9a4-6e2c915d0ab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.473382 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f349931-5145-4f53-a9a4-6e2c915d0ab9-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7f349931-5145-4f53-a9a4-6e2c915d0ab9" (UID: "7f349931-5145-4f53-a9a4-6e2c915d0ab9"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.552370 4779 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f349931-5145-4f53-a9a4-6e2c915d0ab9-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.675142 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.679639 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.11:9090/-/ready\": dial tcp 10.217.1.11:9090: connect: connection refused"
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.858120 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-combined-ca-bundle\") pod \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") "
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.858248 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-public-tls-certs\") pod \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") "
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.858320 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-scripts\") pod \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") "
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.858452 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-config-data\") pod \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") "
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.858485 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7xgw\" (UniqueName: \"kubernetes.io/projected/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-kube-api-access-t7xgw\") pod \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") "
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.858522 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-internal-tls-certs\") pod \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\" (UID: \"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1\") "
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.863551 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-scripts" (OuterVolumeSpecName: "scripts") pod "ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" (UID: "ae81fc0c-b90a-4cc9-a225-caf5d3568ae1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.872318 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-kube-api-access-t7xgw" (OuterVolumeSpecName: "kube-api-access-t7xgw") pod "ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" (UID: "ae81fc0c-b90a-4cc9-a225-caf5d3568ae1"). InnerVolumeSpecName "kube-api-access-t7xgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.962999 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7xgw\" (UniqueName: \"kubernetes.io/projected/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-kube-api-access-t7xgw\") on node \"crc\" DevicePath \"\""
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.963039 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 13:32:16 crc kubenswrapper[4779]: I1128 13:32:16.995057 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-config-data" (OuterVolumeSpecName: "config-data") pod "ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" (UID: "ae81fc0c-b90a-4cc9-a225-caf5d3568ae1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.016236 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" (UID: "ae81fc0c-b90a-4cc9-a225-caf5d3568ae1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.029328 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" (UID: "ae81fc0c-b90a-4cc9-a225-caf5d3568ae1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.044261 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" (UID: "ae81fc0c-b90a-4cc9-a225-caf5d3568ae1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.065545 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.065580 4779 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.065591 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.065610 4779 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.125296 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.130427 4779 scope.go:117] "RemoveContainer" containerID="62ac887c34e98db417a37aca60c8ff5dd801b1a13cf3e720d629ae03313a1016"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.130656 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.141542 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ae81fc0c-b90a-4cc9-a225-caf5d3568ae1","Type":"ContainerDied","Data":"145414fd4307da163e256c8758590b277cc951f3e3606379832537080a320591"}
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.141682 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.148506 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.148495 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"319ee731-3ce1-42ae-bd3e-0c8e38840b1d","Type":"ContainerDied","Data":"b60301e21cf544c72f00ae6a4bcec9237ccf989ad7864af7e4219af7ef466b51"}
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.175130 4779 scope.go:117] "RemoveContainer" containerID="39ff709985bff9f1a0a1d4311392d7a600e04de1157b51fc9ce4a9d31ee31ba3"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.205193 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.207525 4779 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7f349931-5145-4f53-a9a4-6e2c915d0ab9" podUID="488dc09e-4b09-40a3-8bfa-fd3116307f09"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.221638 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.224591 4779 scope.go:117] "RemoveContainer" containerID="20a40b39e6c8de6c23508a8537df04a1c47a0d64135d620466fbecc5946a5ff5"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.256975 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Nov 28 13:32:17 crc kubenswrapper[4779]: E1128 13:32:17.257467 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="prometheus"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.257483 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="prometheus"
Nov 28 13:32:17 crc kubenswrapper[4779]: E1128 13:32:17.257512 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="init-config-reloader"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.257520 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="init-config-reloader"
Nov 28 13:32:17 crc kubenswrapper[4779]: E1128 13:32:17.257532 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-evaluator"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.257539 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-evaluator"
Nov 28 13:32:17 crc kubenswrapper[4779]: E1128 13:32:17.257561 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-notifier"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.257567 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-notifier"
Nov 28 13:32:17 crc kubenswrapper[4779]: E1128 13:32:17.257590 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="config-reloader"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.257597 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="config-reloader"
Nov 28 13:32:17 crc kubenswrapper[4779]: E1128 13:32:17.257611 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-listener"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.257617 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-listener"
Nov 28 13:32:17 crc kubenswrapper[4779]: E1128 13:32:17.257630 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="thanos-sidecar"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.257636 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="thanos-sidecar"
Nov 28 13:32:17 crc kubenswrapper[4779]: E1128 13:32:17.257651 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-api"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.257657 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-api"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.257969 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="thanos-sidecar"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.258009 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-notifier"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.258018 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-api"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.258057 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="config-reloader"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.258078 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-evaluator"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.258100 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" containerName="aodh-listener"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.258115 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" containerName="prometheus"
Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.260541 4779 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.266818 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.267062 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wdvkm" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.267193 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.267538 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.267661 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.268879 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.268977 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phk6p\" (UniqueName: \"kubernetes.io/projected/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-kube-api-access-phk6p\") pod \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.269018 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-tls-assets\") pod \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.269053 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-config-out\") pod \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.269122 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-thanos-prometheus-http-client-file\") pod \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.269161 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-web-config\") pod \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.269188 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-prometheus-metric-storage-rulefiles-0\") pod \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.269248 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-config\") pod \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\" (UID: \"319ee731-3ce1-42ae-bd3e-0c8e38840b1d\") " Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.269860 4779 scope.go:117] "RemoveContainer" containerID="6cbfe3a1a5798f713660a028a50c4f16786efba6254130658fbf77dc6972d28c" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.270829 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "319ee731-3ce1-42ae-bd3e-0c8e38840b1d" (UID: "319ee731-3ce1-42ae-bd3e-0c8e38840b1d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.287173 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-config-out" (OuterVolumeSpecName: "config-out") pod "319ee731-3ce1-42ae-bd3e-0c8e38840b1d" (UID: "319ee731-3ce1-42ae-bd3e-0c8e38840b1d"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.287471 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-config" (OuterVolumeSpecName: "config") pod "319ee731-3ce1-42ae-bd3e-0c8e38840b1d" (UID: "319ee731-3ce1-42ae-bd3e-0c8e38840b1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.287586 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "319ee731-3ce1-42ae-bd3e-0c8e38840b1d" (UID: "319ee731-3ce1-42ae-bd3e-0c8e38840b1d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.287713 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "319ee731-3ce1-42ae-bd3e-0c8e38840b1d" (UID: "319ee731-3ce1-42ae-bd3e-0c8e38840b1d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.295183 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-kube-api-access-phk6p" (OuterVolumeSpecName: "kube-api-access-phk6p") pod "319ee731-3ce1-42ae-bd3e-0c8e38840b1d" (UID: "319ee731-3ce1-42ae-bd3e-0c8e38840b1d"). InnerVolumeSpecName "kube-api-access-phk6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.302275 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "319ee731-3ce1-42ae-bd3e-0c8e38840b1d" (UID: "319ee731-3ce1-42ae-bd3e-0c8e38840b1d"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.315864 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.323953 4779 scope.go:117] "RemoveContainer" containerID="ecb7f508c1a9a6987d85aea3f2ef806b9f15a329c619d932647ba1ad408c56ca" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.333438 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-web-config" (OuterVolumeSpecName: "web-config") pod "319ee731-3ce1-42ae-bd3e-0c8e38840b1d" (UID: "319ee731-3ce1-42ae-bd3e-0c8e38840b1d"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.356891 4779 scope.go:117] "RemoveContainer" containerID="e8652df2b3c0a0f079c77e4001f691055a7555c13a421136412d2f3a82661ce9" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.372166 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-scripts\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.372222 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.372740 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-internal-tls-certs\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.372765 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgnzw\" (UniqueName: \"kubernetes.io/projected/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-kube-api-access-vgnzw\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.373198 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-public-tls-certs\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.373233 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-config-data\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.373309 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-config\") on node \"crc\" DevicePath \"\"" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.373329 4779 reconciler_common.go:286] "operationExecutor.UnmountDevice started for 
volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.373354 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phk6p\" (UniqueName: \"kubernetes.io/projected/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-kube-api-access-phk6p\") on node \"crc\" DevicePath \"\"" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.373365 4779 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.373372 4779 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-config-out\") on node \"crc\" DevicePath \"\"" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.373381 4779 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.373389 4779 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-web-config\") on node \"crc\" DevicePath \"\"" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.373399 4779 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/319ee731-3ce1-42ae-bd3e-0c8e38840b1d-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.408061 4779 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.408841 4779 scope.go:117] "RemoveContainer" containerID="ade1213fe702ecc2625ffb0eaec599ccb825f142ff1aa5a3cc461ecae8d1709e" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.434448 4779 scope.go:117] "RemoveContainer" containerID="12f281c00d07dd0cf97467152a470d764a1210a21ff5af64bb54dc441faa2fe5" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.460928 4779 scope.go:117] "RemoveContainer" containerID="97ef1a4047ed4cc3045abea3848e7952eecb45b096a66f6d243bff6d241154d0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.475441 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-internal-tls-certs\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.475494 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgnzw\" (UniqueName: \"kubernetes.io/projected/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-kube-api-access-vgnzw\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.475601 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-public-tls-certs\") pod \"aodh-0\" (UID: 
\"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.475629 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-config-data\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.475693 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-scripts\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.475715 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.475813 4779 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.487076 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-internal-tls-certs\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.490257 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.492740 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-public-tls-certs\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.500745 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-scripts\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.502894 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgnzw\" (UniqueName: \"kubernetes.io/projected/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-kube-api-access-vgnzw\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.505346 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-config-data\") pod \"aodh-0\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.535171 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:32:17 crc kubenswrapper[4779]: 
I1128 13:32:17.538249 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.550146 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.553028 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.562090 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.562457 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.562313 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.562513 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.562697 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.562713 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-k7vw5" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.562883 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.594743 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.608676 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.703398 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.703463 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.703512 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-config\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.703541 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c6b697b-8ae8-4991-90a0-b453212daf19-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.703575 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.703599 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c6b697b-8ae8-4991-90a0-b453212daf19-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.703619 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.703641 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.703668 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-czv6c\" (UniqueName: \"kubernetes.io/projected/9c6b697b-8ae8-4991-90a0-b453212daf19-kube-api-access-czv6c\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.703691 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.703724 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c6b697b-8ae8-4991-90a0-b453212daf19-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.752415 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="319ee731-3ce1-42ae-bd3e-0c8e38840b1d" path="/var/lib/kubelet/pods/319ee731-3ce1-42ae-bd3e-0c8e38840b1d/volumes" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.757607 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f349931-5145-4f53-a9a4-6e2c915d0ab9" path="/var/lib/kubelet/pods/7f349931-5145-4f53-a9a4-6e2c915d0ab9/volumes" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.758318 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae81fc0c-b90a-4cc9-a225-caf5d3568ae1" path="/var/lib/kubelet/pods/ae81fc0c-b90a-4cc9-a225-caf5d3568ae1/volumes" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.805421 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-config\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.805494 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c6b697b-8ae8-4991-90a0-b453212daf19-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.805542 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.805574 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c6b697b-8ae8-4991-90a0-b453212daf19-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.805603 
4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.805629 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.805669 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czv6c\" (UniqueName: \"kubernetes.io/projected/9c6b697b-8ae8-4991-90a0-b453212daf19-kube-api-access-czv6c\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.805701 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.805749 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c6b697b-8ae8-4991-90a0-b453212daf19-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.805793 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.805836 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.810736 4779 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.811850 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.811976 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c6b697b-8ae8-4991-90a0-b453212daf19-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.813323 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.815010 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c6b697b-8ae8-4991-90a0-b453212daf19-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.815256 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.816194 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.826454 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-config\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.828391 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czv6c\" (UniqueName: \"kubernetes.io/projected/9c6b697b-8ae8-4991-90a0-b453212daf19-kube-api-access-czv6c\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.835871 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.836377 4779 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c6b697b-8ae8-4991-90a0-b453212daf19-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.876254 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:17 crc kubenswrapper[4779]: I1128 13:32:17.882448 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:18 crc kubenswrapper[4779]: W1128 13:32:18.115816 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc307f6f9_8be0_4fdf_bd2c_f91cc9d27fe8.slice/crio-df614fe5f0d9d4e65bd3dce70698e4e467b800a1d75b163f8a31fb4ce425064a WatchSource:0}: Error finding container df614fe5f0d9d4e65bd3dce70698e4e467b800a1d75b163f8a31fb4ce425064a: Status 404 returned error can't find the container with id df614fe5f0d9d4e65bd3dce70698e4e467b800a1d75b163f8a31fb4ce425064a Nov 28 13:32:18 crc kubenswrapper[4779]: I1128 13:32:18.116743 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 13:32:18 crc kubenswrapper[4779]: I1128 13:32:18.164733 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8","Type":"ContainerStarted","Data":"df614fe5f0d9d4e65bd3dce70698e4e467b800a1d75b163f8a31fb4ce425064a"} Nov 28 13:32:18 crc kubenswrapper[4779]: W1128 13:32:18.445114 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c6b697b_8ae8_4991_90a0_b453212daf19.slice/crio-7b667e9fb4b8401828c1e085a9afdae8aa205684afa077a74d443d8637d2cbc0 WatchSource:0}: Error finding container 7b667e9fb4b8401828c1e085a9afdae8aa205684afa077a74d443d8637d2cbc0: Status 404 returned error can't find the container with id 7b667e9fb4b8401828c1e085a9afdae8aa205684afa077a74d443d8637d2cbc0 Nov 28 13:32:18 crc kubenswrapper[4779]: I1128 13:32:18.445320 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:32:19 crc kubenswrapper[4779]: I1128 13:32:19.178954 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8","Type":"ContainerStarted","Data":"65d9f4333d5f67ce3f08ead46cc086a5137e47364fd6d0b611c02aa0758b1e67"} Nov 28 13:32:19 crc kubenswrapper[4779]: I1128 13:32:19.182923 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c6b697b-8ae8-4991-90a0-b453212daf19","Type":"ContainerStarted","Data":"7b667e9fb4b8401828c1e085a9afdae8aa205684afa077a74d443d8637d2cbc0"} Nov 28 13:32:20 crc kubenswrapper[4779]: I1128 13:32:20.193832 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8","Type":"ContainerStarted","Data":"d59d05a8052f2346c3c880b77fffed198b1e8b9b0513f9a07a3023dafdbbc558"} Nov 28 13:32:21 crc kubenswrapper[4779]: I1128 13:32:21.204833 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-0" event={"ID":"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8","Type":"ContainerStarted","Data":"15604321dc0fff32cd2201373db226dc785e61f91920b9976e77362b71e15b77"} Nov 28 13:32:22 crc kubenswrapper[4779]: I1128 13:32:22.216010 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8","Type":"ContainerStarted","Data":"1569f7fe3a2c5a8a83a36139284d2149d2f103e58644850f81e99bf0156c0e20"} Nov 28 13:32:22 crc kubenswrapper[4779]: I1128 13:32:22.217428 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c6b697b-8ae8-4991-90a0-b453212daf19","Type":"ContainerStarted","Data":"3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b"} Nov 28 13:32:22 crc kubenswrapper[4779]: I1128 13:32:22.251225 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.566862957 podStartE2EDuration="5.251198389s" podCreationTimestamp="2025-11-28 13:32:17 +0000 UTC" firstStartedPulling="2025-11-28 13:32:18.121671118 +0000 UTC m=+3398.687346472" lastFinishedPulling="2025-11-28 13:32:20.80600653 +0000 UTC m=+3401.371681904" observedRunningTime="2025-11-28 13:32:22.237854702 +0000 UTC m=+3402.803530076" watchObservedRunningTime="2025-11-28 13:32:22.251198389 +0000 UTC m=+3402.816873743" Nov 28 13:32:29 crc kubenswrapper[4779]: I1128 13:32:29.288665 4779 generic.go:334] "Generic (PLEG): container finished" podID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerID="3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b" exitCode=0 Nov 28 13:32:29 crc kubenswrapper[4779]: I1128 13:32:29.288752 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c6b697b-8ae8-4991-90a0-b453212daf19","Type":"ContainerDied","Data":"3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b"} Nov 28 13:32:30 crc kubenswrapper[4779]: I1128 13:32:30.299796 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c6b697b-8ae8-4991-90a0-b453212daf19","Type":"ContainerStarted","Data":"c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde"} Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.567581 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lxgc4"] Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.570495 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.581976 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxgc4"] Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.701541 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72qwv\" (UniqueName: \"kubernetes.io/projected/f4e91b16-da31-4bdc-8272-a79655413382-kube-api-access-72qwv\") pod \"community-operators-lxgc4\" (UID: \"f4e91b16-da31-4bdc-8272-a79655413382\") " pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.701803 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e91b16-da31-4bdc-8272-a79655413382-catalog-content\") pod \"community-operators-lxgc4\" (UID: \"f4e91b16-da31-4bdc-8272-a79655413382\") " pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.702010 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e91b16-da31-4bdc-8272-a79655413382-utilities\") pod \"community-operators-lxgc4\" (UID: \"f4e91b16-da31-4bdc-8272-a79655413382\") " pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.803903 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72qwv\" (UniqueName: \"kubernetes.io/projected/f4e91b16-da31-4bdc-8272-a79655413382-kube-api-access-72qwv\") pod \"community-operators-lxgc4\" (UID: \"f4e91b16-da31-4bdc-8272-a79655413382\") " pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.804257 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e91b16-da31-4bdc-8272-a79655413382-catalog-content\") pod \"community-operators-lxgc4\" (UID: \"f4e91b16-da31-4bdc-8272-a79655413382\") " pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.804415 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e91b16-da31-4bdc-8272-a79655413382-utilities\") pod \"community-operators-lxgc4\" (UID: \"f4e91b16-da31-4bdc-8272-a79655413382\") " pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.804930 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e91b16-da31-4bdc-8272-a79655413382-catalog-content\") pod \"community-operators-lxgc4\" (UID: \"f4e91b16-da31-4bdc-8272-a79655413382\") " pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.805038 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e91b16-da31-4bdc-8272-a79655413382-utilities\") pod \"community-operators-lxgc4\" (UID: \"f4e91b16-da31-4bdc-8272-a79655413382\") " pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.830742 4779 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-72qwv\" (UniqueName: \"kubernetes.io/projected/f4e91b16-da31-4bdc-8272-a79655413382-kube-api-access-72qwv\") pod \"community-operators-lxgc4\" (UID: \"f4e91b16-da31-4bdc-8272-a79655413382\") " pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:31 crc kubenswrapper[4779]: I1128 13:32:31.904137 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:32 crc kubenswrapper[4779]: I1128 13:32:32.489024 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxgc4"] Nov 28 13:32:32 crc kubenswrapper[4779]: W1128 13:32:32.497360 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4e91b16_da31_4bdc_8272_a79655413382.slice/crio-5936c4069b78520710ff6ca79137aaebafe3411099f8c03c5b1d283f24677325 WatchSource:0}: Error finding container 5936c4069b78520710ff6ca79137aaebafe3411099f8c03c5b1d283f24677325: Status 404 returned error can't find the container with id 5936c4069b78520710ff6ca79137aaebafe3411099f8c03c5b1d283f24677325 Nov 28 13:32:33 crc kubenswrapper[4779]: I1128 13:32:33.334848 4779 generic.go:334] "Generic (PLEG): container finished" podID="f4e91b16-da31-4bdc-8272-a79655413382" containerID="0111a472bc101702ab4c57746e139e2447309d4967e9fbcdd371fd908e6d925c" exitCode=0 Nov 28 13:32:33 crc kubenswrapper[4779]: I1128 13:32:33.335004 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxgc4" event={"ID":"f4e91b16-da31-4bdc-8272-a79655413382","Type":"ContainerDied","Data":"0111a472bc101702ab4c57746e139e2447309d4967e9fbcdd371fd908e6d925c"} Nov 28 13:32:33 crc kubenswrapper[4779]: I1128 13:32:33.336569 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxgc4" event={"ID":"f4e91b16-da31-4bdc-8272-a79655413382","Type":"ContainerStarted","Data":"5936c4069b78520710ff6ca79137aaebafe3411099f8c03c5b1d283f24677325"} Nov 28 13:32:34 crc kubenswrapper[4779]: I1128 13:32:34.348169 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c6b697b-8ae8-4991-90a0-b453212daf19","Type":"ContainerStarted","Data":"618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549"} Nov 28 13:32:34 crc kubenswrapper[4779]: I1128 13:32:34.348213 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c6b697b-8ae8-4991-90a0-b453212daf19","Type":"ContainerStarted","Data":"75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb"} Nov 28 13:32:34 crc kubenswrapper[4779]: I1128 13:32:34.382726 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.382705622 podStartE2EDuration="17.382705622s" podCreationTimestamp="2025-11-28 13:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 13:32:34.372335515 +0000 UTC m=+3414.938010869" watchObservedRunningTime="2025-11-28 13:32:34.382705622 +0000 UTC m=+3414.948380976" Nov 28 13:32:35 crc kubenswrapper[4779]: I1128 13:32:35.359464 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxgc4" 
event={"ID":"f4e91b16-da31-4bdc-8272-a79655413382","Type":"ContainerStarted","Data":"df632230c67999942bac84b05a49829416599f370d9acd4ec430dd4778a9c467"} Nov 28 13:32:37 crc kubenswrapper[4779]: I1128 13:32:37.379942 4779 generic.go:334] "Generic (PLEG): container finished" podID="f4e91b16-da31-4bdc-8272-a79655413382" containerID="df632230c67999942bac84b05a49829416599f370d9acd4ec430dd4778a9c467" exitCode=0 Nov 28 13:32:37 crc kubenswrapper[4779]: I1128 13:32:37.380008 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxgc4" event={"ID":"f4e91b16-da31-4bdc-8272-a79655413382","Type":"ContainerDied","Data":"df632230c67999942bac84b05a49829416599f370d9acd4ec430dd4778a9c467"} Nov 28 13:32:37 crc kubenswrapper[4779]: I1128 13:32:37.883515 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:38 crc kubenswrapper[4779]: I1128 13:32:38.391486 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxgc4" event={"ID":"f4e91b16-da31-4bdc-8272-a79655413382","Type":"ContainerStarted","Data":"5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577"} Nov 28 13:32:38 crc kubenswrapper[4779]: I1128 13:32:38.414604 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lxgc4" podStartSLOduration=2.839263324 podStartE2EDuration="7.414586594s" podCreationTimestamp="2025-11-28 13:32:31 +0000 UTC" firstStartedPulling="2025-11-28 13:32:33.337302114 +0000 UTC m=+3413.902977468" lastFinishedPulling="2025-11-28 13:32:37.912625384 +0000 UTC m=+3418.478300738" observedRunningTime="2025-11-28 13:32:38.410873375 +0000 UTC m=+3418.976548729" watchObservedRunningTime="2025-11-28 13:32:38.414586594 +0000 UTC m=+3418.980261948" Nov 28 13:32:41 crc kubenswrapper[4779]: I1128 13:32:41.904989 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:41 crc kubenswrapper[4779]: I1128 13:32:41.905552 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:41 crc kubenswrapper[4779]: I1128 13:32:41.951880 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:46 crc kubenswrapper[4779]: I1128 13:32:46.284969 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:32:46 crc kubenswrapper[4779]: I1128 13:32:46.285302 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:32:47 crc kubenswrapper[4779]: I1128 13:32:47.883870 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:47 crc kubenswrapper[4779]: I1128 13:32:47.891090 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 28 
13:32:48 crc kubenswrapper[4779]: I1128 13:32:48.490078 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 28 13:32:51 crc kubenswrapper[4779]: I1128 13:32:51.953497 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:52 crc kubenswrapper[4779]: I1128 13:32:52.003769 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxgc4"] Nov 28 13:32:52 crc kubenswrapper[4779]: I1128 13:32:52.521885 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lxgc4" podUID="f4e91b16-da31-4bdc-8272-a79655413382" containerName="registry-server" containerID="cri-o://5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577" gracePeriod=2 Nov 28 13:32:52 crc kubenswrapper[4779]: I1128 13:32:52.992834 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.078505 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e91b16-da31-4bdc-8272-a79655413382-utilities\") pod \"f4e91b16-da31-4bdc-8272-a79655413382\" (UID: \"f4e91b16-da31-4bdc-8272-a79655413382\") " Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.078680 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72qwv\" (UniqueName: \"kubernetes.io/projected/f4e91b16-da31-4bdc-8272-a79655413382-kube-api-access-72qwv\") pod \"f4e91b16-da31-4bdc-8272-a79655413382\" (UID: \"f4e91b16-da31-4bdc-8272-a79655413382\") " Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.078790 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e91b16-da31-4bdc-8272-a79655413382-catalog-content\") pod \"f4e91b16-da31-4bdc-8272-a79655413382\" (UID: \"f4e91b16-da31-4bdc-8272-a79655413382\") " Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.079444 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e91b16-da31-4bdc-8272-a79655413382-utilities" (OuterVolumeSpecName: "utilities") pod "f4e91b16-da31-4bdc-8272-a79655413382" (UID: "f4e91b16-da31-4bdc-8272-a79655413382"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.084595 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e91b16-da31-4bdc-8272-a79655413382-kube-api-access-72qwv" (OuterVolumeSpecName: "kube-api-access-72qwv") pod "f4e91b16-da31-4bdc-8272-a79655413382" (UID: "f4e91b16-da31-4bdc-8272-a79655413382"). InnerVolumeSpecName "kube-api-access-72qwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.129834 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e91b16-da31-4bdc-8272-a79655413382-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4e91b16-da31-4bdc-8272-a79655413382" (UID: "f4e91b16-da31-4bdc-8272-a79655413382"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.181364 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72qwv\" (UniqueName: \"kubernetes.io/projected/f4e91b16-da31-4bdc-8272-a79655413382-kube-api-access-72qwv\") on node \"crc\" DevicePath \"\"" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.181403 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e91b16-da31-4bdc-8272-a79655413382-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.181413 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e91b16-da31-4bdc-8272-a79655413382-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.533690 4779 generic.go:334] "Generic (PLEG): container finished" podID="f4e91b16-da31-4bdc-8272-a79655413382" containerID="5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577" exitCode=0 Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.533731 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxgc4" event={"ID":"f4e91b16-da31-4bdc-8272-a79655413382","Type":"ContainerDied","Data":"5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577"} Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.533759 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxgc4" event={"ID":"f4e91b16-da31-4bdc-8272-a79655413382","Type":"ContainerDied","Data":"5936c4069b78520710ff6ca79137aaebafe3411099f8c03c5b1d283f24677325"} Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.533777 4779 scope.go:117] "RemoveContainer" containerID="5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.533833 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxgc4" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.559648 4779 scope.go:117] "RemoveContainer" containerID="df632230c67999942bac84b05a49829416599f370d9acd4ec430dd4778a9c467" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.565719 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxgc4"] Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.575154 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lxgc4"] Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.596628 4779 scope.go:117] "RemoveContainer" containerID="0111a472bc101702ab4c57746e139e2447309d4967e9fbcdd371fd908e6d925c" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.628251 4779 scope.go:117] "RemoveContainer" containerID="5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577" Nov 28 13:32:53 crc kubenswrapper[4779]: E1128 13:32:53.628663 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577\": container with ID starting with 5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577 not found: ID does not exist" containerID="5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.628697 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577"} err="failed to get container status \"5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577\": rpc error: code = NotFound desc = could not find container \"5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577\": container with ID starting with 5d2dd23aeee8ff1aaf24c20ee573d94329d5b03107f4c66ce0ae8a759bb5a577 not found: ID does not exist" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.628718 4779 scope.go:117] "RemoveContainer" containerID="df632230c67999942bac84b05a49829416599f370d9acd4ec430dd4778a9c467" Nov 28 13:32:53 crc kubenswrapper[4779]: E1128 13:32:53.629151 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df632230c67999942bac84b05a49829416599f370d9acd4ec430dd4778a9c467\": container with ID starting with df632230c67999942bac84b05a49829416599f370d9acd4ec430dd4778a9c467 not found: ID does not exist" containerID="df632230c67999942bac84b05a49829416599f370d9acd4ec430dd4778a9c467" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.629193 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df632230c67999942bac84b05a49829416599f370d9acd4ec430dd4778a9c467"} err="failed to get container status \"df632230c67999942bac84b05a49829416599f370d9acd4ec430dd4778a9c467\": rpc error: code = NotFound desc = could not find container \"df632230c67999942bac84b05a49829416599f370d9acd4ec430dd4778a9c467\": container with ID starting with df632230c67999942bac84b05a49829416599f370d9acd4ec430dd4778a9c467 not found: ID does not exist" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.629230 4779 scope.go:117] "RemoveContainer" containerID="0111a472bc101702ab4c57746e139e2447309d4967e9fbcdd371fd908e6d925c" Nov 28 13:32:53 crc kubenswrapper[4779]: E1128 13:32:53.629633 4779 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0111a472bc101702ab4c57746e139e2447309d4967e9fbcdd371fd908e6d925c\": container with ID starting with 0111a472bc101702ab4c57746e139e2447309d4967e9fbcdd371fd908e6d925c not found: ID does not exist" containerID="0111a472bc101702ab4c57746e139e2447309d4967e9fbcdd371fd908e6d925c" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.629676 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0111a472bc101702ab4c57746e139e2447309d4967e9fbcdd371fd908e6d925c"} err="failed to get container status \"0111a472bc101702ab4c57746e139e2447309d4967e9fbcdd371fd908e6d925c\": rpc error: code = NotFound desc = could not find container \"0111a472bc101702ab4c57746e139e2447309d4967e9fbcdd371fd908e6d925c\": container with ID starting with 0111a472bc101702ab4c57746e139e2447309d4967e9fbcdd371fd908e6d925c not found: ID does not exist" Nov 28 13:32:53 crc kubenswrapper[4779]: I1128 13:32:53.737575 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e91b16-da31-4bdc-8272-a79655413382" path="/var/lib/kubelet/pods/f4e91b16-da31-4bdc-8272-a79655413382/volumes" Nov 28 13:33:16 crc kubenswrapper[4779]: I1128 13:33:16.285235 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:33:16 crc kubenswrapper[4779]: I1128 13:33:16.285837 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:33:16 crc kubenswrapper[4779]: I1128 13:33:16.285891 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 13:33:16 crc kubenswrapper[4779]: I1128 13:33:16.286704 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e44906474ac2ee22a4abe440e8a06eaadb743a6b2583926c64ac53c9f7bc166"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 13:33:16 crc kubenswrapper[4779]: I1128 13:33:16.286788 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://3e44906474ac2ee22a4abe440e8a06eaadb743a6b2583926c64ac53c9f7bc166" gracePeriod=600 Nov 28 13:33:16 crc kubenswrapper[4779]: I1128 13:33:16.788015 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="3e44906474ac2ee22a4abe440e8a06eaadb743a6b2583926c64ac53c9f7bc166" exitCode=0 Nov 28 13:33:16 crc kubenswrapper[4779]: I1128 13:33:16.788056 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" 
event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"3e44906474ac2ee22a4abe440e8a06eaadb743a6b2583926c64ac53c9f7bc166"} Nov 28 13:33:16 crc kubenswrapper[4779]: I1128 13:33:16.788409 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"} Nov 28 13:33:16 crc kubenswrapper[4779]: I1128 13:33:16.788441 4779 scope.go:117] "RemoveContainer" containerID="787907d9b97619607abf8c4f9cecb91840367136040816abeda0737b36259574" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.113475 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6d4xw"] Nov 28 13:33:45 crc kubenswrapper[4779]: E1128 13:33:45.115064 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e91b16-da31-4bdc-8272-a79655413382" containerName="extract-content" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.115079 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e91b16-da31-4bdc-8272-a79655413382" containerName="extract-content" Nov 28 13:33:45 crc kubenswrapper[4779]: E1128 13:33:45.115107 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e91b16-da31-4bdc-8272-a79655413382" containerName="extract-utilities" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.115114 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e91b16-da31-4bdc-8272-a79655413382" containerName="extract-utilities" Nov 28 13:33:45 crc kubenswrapper[4779]: E1128 13:33:45.115137 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e91b16-da31-4bdc-8272-a79655413382" containerName="registry-server" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.115146 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e91b16-da31-4bdc-8272-a79655413382" containerName="registry-server" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.115368 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e91b16-da31-4bdc-8272-a79655413382" containerName="registry-server" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.116924 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.132762 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6d4xw"] Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.256513 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-catalog-content\") pod \"certified-operators-6d4xw\" (UID: \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\") " pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.256835 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4n2\" (UniqueName: \"kubernetes.io/projected/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-kube-api-access-rh4n2\") pod \"certified-operators-6d4xw\" (UID: \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\") " pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.257009 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-utilities\") pod \"certified-operators-6d4xw\" (UID: \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\") " pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.358845 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-utilities\") pod \"certified-operators-6d4xw\" (UID: \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\") " pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.359438 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-utilities\") pod \"certified-operators-6d4xw\" (UID: \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\") " pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.360882 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-catalog-content\") pod \"certified-operators-6d4xw\" (UID: \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\") " pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.360961 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh4n2\" (UniqueName: \"kubernetes.io/projected/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-kube-api-access-rh4n2\") pod \"certified-operators-6d4xw\" (UID: \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\") " pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.361404 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-catalog-content\") pod \"certified-operators-6d4xw\" (UID: \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\") " pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.384288 4779 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rh4n2\" (UniqueName: \"kubernetes.io/projected/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-kube-api-access-rh4n2\") pod \"certified-operators-6d4xw\" (UID: \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\") " pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.448709 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:45 crc kubenswrapper[4779]: I1128 13:33:45.993402 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6d4xw"] Nov 28 13:33:46 crc kubenswrapper[4779]: I1128 13:33:46.087215 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6d4xw" event={"ID":"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3","Type":"ContainerStarted","Data":"71abe56d6712fe9d2b03c566c701e809bfc77461b808e21706e78330884ce045"} Nov 28 13:33:47 crc kubenswrapper[4779]: I1128 13:33:47.098516 4779 generic.go:334] "Generic (PLEG): container finished" podID="96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" containerID="2bbf7090b595949b97ab6de00896c3c92170376f858fa6500cdfb39f0539e3fc" exitCode=0 Nov 28 13:33:47 crc kubenswrapper[4779]: I1128 13:33:47.098584 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6d4xw" event={"ID":"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3","Type":"ContainerDied","Data":"2bbf7090b595949b97ab6de00896c3c92170376f858fa6500cdfb39f0539e3fc"} Nov 28 13:33:49 crc kubenswrapper[4779]: I1128 13:33:49.124509 4779 generic.go:334] "Generic (PLEG): container finished" podID="96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" containerID="e3fa89f962ed539c83311158a1451ed03a25283e79e0f4c81190533f23abb12b" exitCode=0 Nov 28 13:33:49 crc kubenswrapper[4779]: I1128 13:33:49.124573 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6d4xw" event={"ID":"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3","Type":"ContainerDied","Data":"e3fa89f962ed539c83311158a1451ed03a25283e79e0f4c81190533f23abb12b"} Nov 28 13:33:50 crc kubenswrapper[4779]: I1128 13:33:50.135542 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6d4xw" event={"ID":"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3","Type":"ContainerStarted","Data":"e2a3c2a1391a806f6a623f5cda56366ddcee38073d6e5605214ea55c7422cb07"} Nov 28 13:33:50 crc kubenswrapper[4779]: I1128 13:33:50.164388 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6d4xw" podStartSLOduration=2.709118582 podStartE2EDuration="5.164367394s" podCreationTimestamp="2025-11-28 13:33:45 +0000 UTC" firstStartedPulling="2025-11-28 13:33:47.101791257 +0000 UTC m=+3487.667466611" lastFinishedPulling="2025-11-28 13:33:49.557040069 +0000 UTC m=+3490.122715423" observedRunningTime="2025-11-28 13:33:50.15597847 +0000 UTC m=+3490.721653824" watchObservedRunningTime="2025-11-28 13:33:50.164367394 +0000 UTC m=+3490.730042748" Nov 28 13:33:55 crc kubenswrapper[4779]: I1128 13:33:55.449980 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:55 crc kubenswrapper[4779]: I1128 13:33:55.450511 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:55 crc kubenswrapper[4779]: I1128 13:33:55.498975 4779 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:56 crc kubenswrapper[4779]: I1128 13:33:56.278786 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:56 crc kubenswrapper[4779]: I1128 13:33:56.336747 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6d4xw"] Nov 28 13:33:58 crc kubenswrapper[4779]: I1128 13:33:58.245833 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6d4xw" podUID="96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" containerName="registry-server" containerID="cri-o://e2a3c2a1391a806f6a623f5cda56366ddcee38073d6e5605214ea55c7422cb07" gracePeriod=2 Nov 28 13:33:59 crc kubenswrapper[4779]: I1128 13:33:59.259963 4779 generic.go:334] "Generic (PLEG): container finished" podID="96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" containerID="e2a3c2a1391a806f6a623f5cda56366ddcee38073d6e5605214ea55c7422cb07" exitCode=0 Nov 28 13:33:59 crc kubenswrapper[4779]: I1128 13:33:59.260067 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6d4xw" event={"ID":"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3","Type":"ContainerDied","Data":"e2a3c2a1391a806f6a623f5cda56366ddcee38073d6e5605214ea55c7422cb07"} Nov 28 13:33:59 crc kubenswrapper[4779]: I1128 13:33:59.517216 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:33:59 crc kubenswrapper[4779]: I1128 13:33:59.582480 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-utilities\") pod \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\" (UID: \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\") " Nov 28 13:33:59 crc kubenswrapper[4779]: I1128 13:33:59.582535 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-catalog-content\") pod \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\" (UID: \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\") " Nov 28 13:33:59 crc kubenswrapper[4779]: I1128 13:33:59.582768 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh4n2\" (UniqueName: \"kubernetes.io/projected/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-kube-api-access-rh4n2\") pod \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\" (UID: \"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3\") " Nov 28 13:33:59 crc kubenswrapper[4779]: I1128 13:33:59.583502 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-utilities" (OuterVolumeSpecName: "utilities") pod "96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" (UID: "96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:33:59 crc kubenswrapper[4779]: I1128 13:33:59.589964 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-kube-api-access-rh4n2" (OuterVolumeSpecName: "kube-api-access-rh4n2") pod "96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" (UID: "96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3"). InnerVolumeSpecName "kube-api-access-rh4n2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:33:59 crc kubenswrapper[4779]: I1128 13:33:59.644085 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" (UID: "96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:33:59 crc kubenswrapper[4779]: I1128 13:33:59.683843 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:33:59 crc kubenswrapper[4779]: I1128 13:33:59.683874 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:33:59 crc kubenswrapper[4779]: I1128 13:33:59.683886 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh4n2\" (UniqueName: \"kubernetes.io/projected/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3-kube-api-access-rh4n2\") on node \"crc\" DevicePath \"\"" Nov 28 13:34:00 crc kubenswrapper[4779]: I1128 13:34:00.272515 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6d4xw" event={"ID":"96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3","Type":"ContainerDied","Data":"71abe56d6712fe9d2b03c566c701e809bfc77461b808e21706e78330884ce045"} Nov 28 13:34:00 crc kubenswrapper[4779]: I1128 13:34:00.272582 4779 scope.go:117] "RemoveContainer" containerID="e2a3c2a1391a806f6a623f5cda56366ddcee38073d6e5605214ea55c7422cb07" Nov 28 13:34:00 crc kubenswrapper[4779]: I1128 13:34:00.272595 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6d4xw" Nov 28 13:34:00 crc kubenswrapper[4779]: I1128 13:34:00.303549 4779 scope.go:117] "RemoveContainer" containerID="e3fa89f962ed539c83311158a1451ed03a25283e79e0f4c81190533f23abb12b" Nov 28 13:34:00 crc kubenswrapper[4779]: I1128 13:34:00.305023 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6d4xw"] Nov 28 13:34:00 crc kubenswrapper[4779]: I1128 13:34:00.314942 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6d4xw"] Nov 28 13:34:00 crc kubenswrapper[4779]: I1128 13:34:00.325248 4779 scope.go:117] "RemoveContainer" containerID="2bbf7090b595949b97ab6de00896c3c92170376f858fa6500cdfb39f0539e3fc" Nov 28 13:34:01 crc kubenswrapper[4779]: I1128 13:34:01.748560 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" path="/var/lib/kubelet/pods/96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3/volumes" Nov 28 13:34:17 crc kubenswrapper[4779]: I1128 13:34:17.274637 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7574d9569-x822f_f1d9753d-b49d-4e32-b312-137314283984/manager/0.log" Nov 28 13:34:19 crc kubenswrapper[4779]: I1128 13:34:19.254717 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:34:19 crc kubenswrapper[4779]: I1128 13:34:19.255467 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="prometheus" containerID="cri-o://c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde" gracePeriod=600 Nov 28 13:34:19 crc kubenswrapper[4779]: I1128 13:34:19.255595 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="config-reloader" containerID="cri-o://75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb" gracePeriod=600 Nov 28 13:34:19 crc kubenswrapper[4779]: I1128 13:34:19.255586 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="thanos-sidecar" containerID="cri-o://618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549" gracePeriod=600 Nov 28 13:34:19 crc kubenswrapper[4779]: I1128 13:34:19.477250 4779 generic.go:334] "Generic (PLEG): container finished" podID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerID="618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549" exitCode=0 Nov 28 13:34:19 crc kubenswrapper[4779]: I1128 13:34:19.477297 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c6b697b-8ae8-4991-90a0-b453212daf19","Type":"ContainerDied","Data":"618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549"} Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.239887 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.335957 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-thanos-prometheus-http-client-file\") pod \"9c6b697b-8ae8-4991-90a0-b453212daf19\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.336072 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c6b697b-8ae8-4991-90a0-b453212daf19-prometheus-metric-storage-rulefiles-0\") pod \"9c6b697b-8ae8-4991-90a0-b453212daf19\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.336137 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-secret-combined-ca-bundle\") pod \"9c6b697b-8ae8-4991-90a0-b453212daf19\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.336183 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czv6c\" (UniqueName: \"kubernetes.io/projected/9c6b697b-8ae8-4991-90a0-b453212daf19-kube-api-access-czv6c\") pod \"9c6b697b-8ae8-4991-90a0-b453212daf19\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.336386 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c6b697b-8ae8-4991-90a0-b453212daf19-config-out\") pod \"9c6b697b-8ae8-4991-90a0-b453212daf19\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.336459 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"9c6b697b-8ae8-4991-90a0-b453212daf19\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.336485 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"9c6b697b-8ae8-4991-90a0-b453212daf19\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.336516 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-config\") pod \"9c6b697b-8ae8-4991-90a0-b453212daf19\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.336539 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"9c6b697b-8ae8-4991-90a0-b453212daf19\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") " Nov 28 13:34:20 crc 
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.336578 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config\") pod \"9c6b697b-8ae8-4991-90a0-b453212daf19\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") "
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.336602 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c6b697b-8ae8-4991-90a0-b453212daf19-tls-assets\") pod \"9c6b697b-8ae8-4991-90a0-b453212daf19\" (UID: \"9c6b697b-8ae8-4991-90a0-b453212daf19\") "
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.337358 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c6b697b-8ae8-4991-90a0-b453212daf19-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9c6b697b-8ae8-4991-90a0-b453212daf19" (UID: "9c6b697b-8ae8-4991-90a0-b453212daf19"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.342584 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "9c6b697b-8ae8-4991-90a0-b453212daf19" (UID: "9c6b697b-8ae8-4991-90a0-b453212daf19"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.343569 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "9c6b697b-8ae8-4991-90a0-b453212daf19" (UID: "9c6b697b-8ae8-4991-90a0-b453212daf19"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.343606 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9c6b697b-8ae8-4991-90a0-b453212daf19" (UID: "9c6b697b-8ae8-4991-90a0-b453212daf19"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.343622 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6b697b-8ae8-4991-90a0-b453212daf19-kube-api-access-czv6c" (OuterVolumeSpecName: "kube-api-access-czv6c") pod "9c6b697b-8ae8-4991-90a0-b453212daf19" (UID: "9c6b697b-8ae8-4991-90a0-b453212daf19"). InnerVolumeSpecName "kube-api-access-czv6c". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.344536 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6b697b-8ae8-4991-90a0-b453212daf19-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9c6b697b-8ae8-4991-90a0-b453212daf19" (UID: "9c6b697b-8ae8-4991-90a0-b453212daf19"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.345417 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-config" (OuterVolumeSpecName: "config") pod "9c6b697b-8ae8-4991-90a0-b453212daf19" (UID: "9c6b697b-8ae8-4991-90a0-b453212daf19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.346260 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c6b697b-8ae8-4991-90a0-b453212daf19-config-out" (OuterVolumeSpecName: "config-out") pod "9c6b697b-8ae8-4991-90a0-b453212daf19" (UID: "9c6b697b-8ae8-4991-90a0-b453212daf19"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.358230 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "9c6b697b-8ae8-4991-90a0-b453212daf19" (UID: "9c6b697b-8ae8-4991-90a0-b453212daf19"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.365261 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9c6b697b-8ae8-4991-90a0-b453212daf19" (UID: "9c6b697b-8ae8-4991-90a0-b453212daf19"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.441493 4779 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c6b697b-8ae8-4991-90a0-b453212daf19-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.441615 4779 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.441699 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czv6c\" (UniqueName: \"kubernetes.io/projected/9c6b697b-8ae8-4991-90a0-b453212daf19-kube-api-access-czv6c\") on node \"crc\" DevicePath \"\"" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.441763 4779 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c6b697b-8ae8-4991-90a0-b453212daf19-config-out\") on node \"crc\" DevicePath \"\"" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.441854 4779 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.442025 4779 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.442165 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-config\") on node \"crc\" DevicePath \"\"" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.442428 4779 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.442570 4779 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c6b697b-8ae8-4991-90a0-b453212daf19-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.442639 4779 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.451801 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config" (OuterVolumeSpecName: "web-config") pod "9c6b697b-8ae8-4991-90a0-b453212daf19" (UID: "9c6b697b-8ae8-4991-90a0-b453212daf19"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.470321 4779 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.494911 4779 generic.go:334] "Generic (PLEG): container finished" podID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerID="75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb" exitCode=0 Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.494938 4779 generic.go:334] "Generic (PLEG): container finished" podID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerID="c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde" exitCode=0 Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.494956 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c6b697b-8ae8-4991-90a0-b453212daf19","Type":"ContainerDied","Data":"75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb"} Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.494978 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c6b697b-8ae8-4991-90a0-b453212daf19","Type":"ContainerDied","Data":"c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde"} Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.494979 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.494989 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9c6b697b-8ae8-4991-90a0-b453212daf19","Type":"ContainerDied","Data":"7b667e9fb4b8401828c1e085a9afdae8aa205684afa077a74d443d8637d2cbc0"} Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.495002 4779 scope.go:117] "RemoveContainer" containerID="618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.544263 4779 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c6b697b-8ae8-4991-90a0-b453212daf19-web-config\") on node \"crc\" DevicePath \"\"" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.544295 4779 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.565011 4779 scope.go:117] "RemoveContainer" containerID="75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.572710 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.581727 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.593511 4779 scope.go:117] "RemoveContainer" containerID="c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.613306 4779 scope.go:117] "RemoveContainer" containerID="3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.639986 4779 scope.go:117] 
"RemoveContainer" containerID="618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549" Nov 28 13:34:20 crc kubenswrapper[4779]: E1128 13:34:20.640414 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549\": container with ID starting with 618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549 not found: ID does not exist" containerID="618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.640440 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549"} err="failed to get container status \"618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549\": rpc error: code = NotFound desc = could not find container \"618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549\": container with ID starting with 618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549 not found: ID does not exist" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.640459 4779 scope.go:117] "RemoveContainer" containerID="75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb" Nov 28 13:34:20 crc kubenswrapper[4779]: E1128 13:34:20.641235 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb\": container with ID starting with 75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb not found: ID does not exist" containerID="75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.641260 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb"} err="failed to get container status \"75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb\": rpc error: code = NotFound desc = could not find container \"75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb\": container with ID starting with 75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb not found: ID does not exist" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.641273 4779 scope.go:117] "RemoveContainer" containerID="c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde" Nov 28 13:34:20 crc kubenswrapper[4779]: E1128 13:34:20.641700 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde\": container with ID starting with c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde not found: ID does not exist" containerID="c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde" Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.641744 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde"} err="failed to get container status \"c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde\": rpc error: code = NotFound desc = could not find container \"c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde\": container with ID starting with 
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.641778 4779 scope.go:117] "RemoveContainer" containerID="3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b"
Nov 28 13:34:20 crc kubenswrapper[4779]: E1128 13:34:20.642080 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b\": container with ID starting with 3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b not found: ID does not exist" containerID="3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b"
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.642142 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b"} err="failed to get container status \"3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b\": rpc error: code = NotFound desc = could not find container \"3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b\": container with ID starting with 3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b not found: ID does not exist"
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.642161 4779 scope.go:117] "RemoveContainer" containerID="618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549"
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.642493 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549"} err="failed to get container status \"618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549\": rpc error: code = NotFound desc = could not find container \"618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549\": container with ID starting with 618007808bfc5ca1950f8e82001d924f5151b3cdeb0ba7e3d67a93d48183d549 not found: ID does not exist"
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.642515 4779 scope.go:117] "RemoveContainer" containerID="75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb"
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.642703 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb"} err="failed to get container status \"75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb\": rpc error: code = NotFound desc = could not find container \"75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb\": container with ID starting with 75f7b6f68fcb514ca2dee03da0e921749efb4f0c7bc20cfcaf4a3e22339a2bbb not found: ID does not exist"
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.642721 4779 scope.go:117] "RemoveContainer" containerID="c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde"
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.642873 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde"} err="failed to get container status \"c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde\": rpc error: code = NotFound desc = could not find container \"c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde\": container with ID starting with c189e4cea3228cdf1a675f651033c8f62ee2f972ff0adb85eb1420163e781bde not found: ID does not exist"
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.642892 4779 scope.go:117] "RemoveContainer" containerID="3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b"
Nov 28 13:34:20 crc kubenswrapper[4779]: I1128 13:34:20.643040 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b"} err="failed to get container status \"3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b\": rpc error: code = NotFound desc = could not find container \"3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b\": container with ID starting with 3ef83135eeae56df36102e72a9a861862e07ab0b3dc0b00489644358f5e88a3b not found: ID does not exist"
Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.403798 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 28 13:34:21 crc kubenswrapper[4779]: E1128 13:34:21.407904 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="prometheus"
Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.407948 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="prometheus"
Nov 28 13:34:21 crc kubenswrapper[4779]: E1128 13:34:21.408799 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" containerName="extract-content"
Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.408822 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" containerName="extract-content"
Nov 28 13:34:21 crc kubenswrapper[4779]: E1128 13:34:21.408844 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="thanos-sidecar"
Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.408856 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="thanos-sidecar"
Nov 28 13:34:21 crc kubenswrapper[4779]: E1128 13:34:21.408884 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" containerName="extract-utilities"
Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.408895 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" containerName="extract-utilities"
Nov 28 13:34:21 crc kubenswrapper[4779]: E1128 13:34:21.408916 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="init-config-reloader"
Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.408927 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="init-config-reloader"
Nov 28 13:34:21 crc kubenswrapper[4779]: E1128 13:34:21.408959 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" containerName="registry-server"
Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.408967 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" containerName="registry-server"
Nov 28 13:34:21 crc kubenswrapper[4779]: E1128 13:34:21.409010 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="config-reloader"
podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="config-reloader" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.409046 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="config-reloader" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.409907 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="prometheus" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.409958 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="thanos-sidecar" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.409981 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="96189f94-2a1f-46f4-9a6c-69a8e0ba0bf3" containerName="registry-server" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.410006 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" containerName="config-reloader" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.428443 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.431050 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.431228 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.431064 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.431413 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.431691 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-k7vw5" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.432010 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.439075 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.448005 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.574406 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.574787 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 
28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.574823 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgwz7\" (UniqueName: \"kubernetes.io/projected/c406f7f5-9e76-44f6-b507-59c1b6d813d5-kube-api-access-xgwz7\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.574858 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c406f7f5-9e76-44f6-b507-59c1b6d813d5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.574896 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.574976 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c406f7f5-9e76-44f6-b507-59c1b6d813d5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.575015 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.575107 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.575138 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/c406f7f5-9e76-44f6-b507-59c1b6d813d5-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.575171 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c406f7f5-9e76-44f6-b507-59c1b6d813d5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc 
kubenswrapper[4779]: I1128 13:34:21.575209 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-config\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.676587 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.676963 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.677058 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/c406f7f5-9e76-44f6-b507-59c1b6d813d5-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.677182 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c406f7f5-9e76-44f6-b507-59c1b6d813d5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.677308 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-config\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.677430 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.677510 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.677577 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgwz7\" (UniqueName: 
\"kubernetes.io/projected/c406f7f5-9e76-44f6-b507-59c1b6d813d5-kube-api-access-xgwz7\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.677652 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c406f7f5-9e76-44f6-b507-59c1b6d813d5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.677723 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/c406f7f5-9e76-44f6-b507-59c1b6d813d5-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.677733 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.677871 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c406f7f5-9e76-44f6-b507-59c1b6d813d5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.678501 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c406f7f5-9e76-44f6-b507-59c1b6d813d5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.682017 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-config\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.683279 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c406f7f5-9e76-44f6-b507-59c1b6d813d5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.683702 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.683797 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.684387 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c406f7f5-9e76-44f6-b507-59c1b6d813d5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.685302 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.686831 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.696528 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgwz7\" (UniqueName: \"kubernetes.io/projected/c406f7f5-9e76-44f6-b507-59c1b6d813d5-kube-api-access-xgwz7\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.696601 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.738500 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c6b697b-8ae8-4991-90a0-b453212daf19" path="/var/lib/kubelet/pods/9c6b697b-8ae8-4991-90a0-b453212daf19/volumes" Nov 28 13:34:21 crc kubenswrapper[4779]: I1128 13:34:21.768939 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:22 crc kubenswrapper[4779]: I1128 13:34:22.361299 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:34:22 crc kubenswrapper[4779]: I1128 13:34:22.516486 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c406f7f5-9e76-44f6-b507-59c1b6d813d5","Type":"ContainerStarted","Data":"e5ebe7b95a1afb818686b9fb73795ff4c3b2fb53607691395b48c5b574a9dda5"} Nov 28 13:34:26 crc kubenswrapper[4779]: I1128 13:34:26.561068 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c406f7f5-9e76-44f6-b507-59c1b6d813d5","Type":"ContainerStarted","Data":"88f4a6826d6087cc6124157db64ab859f19b03d7bed5d3f7253fd1494c220fdc"} Nov 28 13:34:33 crc kubenswrapper[4779]: I1128 13:34:33.631702 4779 generic.go:334] "Generic (PLEG): container finished" podID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerID="88f4a6826d6087cc6124157db64ab859f19b03d7bed5d3f7253fd1494c220fdc" exitCode=0 Nov 28 13:34:33 crc kubenswrapper[4779]: I1128 13:34:33.631793 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c406f7f5-9e76-44f6-b507-59c1b6d813d5","Type":"ContainerDied","Data":"88f4a6826d6087cc6124157db64ab859f19b03d7bed5d3f7253fd1494c220fdc"} Nov 28 13:34:34 crc kubenswrapper[4779]: I1128 13:34:34.644297 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c406f7f5-9e76-44f6-b507-59c1b6d813d5","Type":"ContainerStarted","Data":"b2ddc23e2889eb987c122d6d78362a07a1bd627f16329e71e39b75aa73535ceb"} Nov 28 13:34:37 crc kubenswrapper[4779]: I1128 13:34:37.676166 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c406f7f5-9e76-44f6-b507-59c1b6d813d5","Type":"ContainerStarted","Data":"47cb579ae249a9a130c5906e3c8d4c95f186297ccb64728c117d40e2d3f62192"} Nov 28 13:34:38 crc kubenswrapper[4779]: I1128 13:34:38.693766 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c406f7f5-9e76-44f6-b507-59c1b6d813d5","Type":"ContainerStarted","Data":"b8844ee70f1989e958bb097a732de0b3b39cb3b35b5ec9b0efed624d1723af49"} Nov 28 13:34:38 crc kubenswrapper[4779]: I1128 13:34:38.725841 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.725816623 podStartE2EDuration="17.725816623s" podCreationTimestamp="2025-11-28 13:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 13:34:38.72233925 +0000 UTC m=+3539.288014624" watchObservedRunningTime="2025-11-28 13:34:38.725816623 +0000 UTC m=+3539.291492007" Nov 28 13:34:41 crc kubenswrapper[4779]: I1128 13:34:41.770037 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:51 crc kubenswrapper[4779]: I1128 13:34:51.769858 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:51 crc kubenswrapper[4779]: I1128 13:34:51.776972 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 28 13:34:51 crc kubenswrapper[4779]: I1128 13:34:51.829501 4779 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.173018 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l5kft"] Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.179349 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.189268 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5kft"] Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.291024 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f94bff-39c4-498c-a999-64c10e767465-catalog-content\") pod \"redhat-marketplace-l5kft\" (UID: \"88f94bff-39c4-498c-a999-64c10e767465\") " pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.291105 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f94bff-39c4-498c-a999-64c10e767465-utilities\") pod \"redhat-marketplace-l5kft\" (UID: \"88f94bff-39c4-498c-a999-64c10e767465\") " pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.291129 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqn62\" (UniqueName: \"kubernetes.io/projected/88f94bff-39c4-498c-a999-64c10e767465-kube-api-access-rqn62\") pod \"redhat-marketplace-l5kft\" (UID: \"88f94bff-39c4-498c-a999-64c10e767465\") " pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.392947 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f94bff-39c4-498c-a999-64c10e767465-catalog-content\") pod \"redhat-marketplace-l5kft\" (UID: \"88f94bff-39c4-498c-a999-64c10e767465\") " pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.393499 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f94bff-39c4-498c-a999-64c10e767465-utilities\") pod \"redhat-marketplace-l5kft\" (UID: \"88f94bff-39c4-498c-a999-64c10e767465\") " pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.393794 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqn62\" (UniqueName: \"kubernetes.io/projected/88f94bff-39c4-498c-a999-64c10e767465-kube-api-access-rqn62\") pod \"redhat-marketplace-l5kft\" (UID: \"88f94bff-39c4-498c-a999-64c10e767465\") " pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.393445 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f94bff-39c4-498c-a999-64c10e767465-catalog-content\") pod \"redhat-marketplace-l5kft\" (UID: \"88f94bff-39c4-498c-a999-64c10e767465\") " pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.393750 4779 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f94bff-39c4-498c-a999-64c10e767465-utilities\") pod \"redhat-marketplace-l5kft\" (UID: \"88f94bff-39c4-498c-a999-64c10e767465\") " pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.421151 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqn62\" (UniqueName: \"kubernetes.io/projected/88f94bff-39c4-498c-a999-64c10e767465-kube-api-access-rqn62\") pod \"redhat-marketplace-l5kft\" (UID: \"88f94bff-39c4-498c-a999-64c10e767465\") " pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.506511 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:10 crc kubenswrapper[4779]: I1128 13:35:10.996641 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5kft"] Nov 28 13:35:11 crc kubenswrapper[4779]: W1128 13:35:11.002007 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88f94bff_39c4_498c_a999_64c10e767465.slice/crio-77575f58ff7b8da58d7d6bca5ce569848e2ca1a4ac3bb62a7ab1d07498c48d37 WatchSource:0}: Error finding container 77575f58ff7b8da58d7d6bca5ce569848e2ca1a4ac3bb62a7ab1d07498c48d37: Status 404 returned error can't find the container with id 77575f58ff7b8da58d7d6bca5ce569848e2ca1a4ac3bb62a7ab1d07498c48d37 Nov 28 13:35:12 crc kubenswrapper[4779]: I1128 13:35:12.005724 4779 generic.go:334] "Generic (PLEG): container finished" podID="88f94bff-39c4-498c-a999-64c10e767465" containerID="5d42436fac9922c6142be1e67f193a6fdf164f7b7edc768328367ccf8a786348" exitCode=0 Nov 28 13:35:12 crc kubenswrapper[4779]: I1128 13:35:12.005923 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5kft" event={"ID":"88f94bff-39c4-498c-a999-64c10e767465","Type":"ContainerDied","Data":"5d42436fac9922c6142be1e67f193a6fdf164f7b7edc768328367ccf8a786348"} Nov 28 13:35:12 crc kubenswrapper[4779]: I1128 13:35:12.006151 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5kft" event={"ID":"88f94bff-39c4-498c-a999-64c10e767465","Type":"ContainerStarted","Data":"77575f58ff7b8da58d7d6bca5ce569848e2ca1a4ac3bb62a7ab1d07498c48d37"} Nov 28 13:35:14 crc kubenswrapper[4779]: I1128 13:35:14.024263 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5kft" event={"ID":"88f94bff-39c4-498c-a999-64c10e767465","Type":"ContainerStarted","Data":"05faa8ed70d1e128ee274abf923c4daffb6940be10d9121bd837782bbd8b31c2"} Nov 28 13:35:15 crc kubenswrapper[4779]: I1128 13:35:15.034298 4779 generic.go:334] "Generic (PLEG): container finished" podID="88f94bff-39c4-498c-a999-64c10e767465" containerID="05faa8ed70d1e128ee274abf923c4daffb6940be10d9121bd837782bbd8b31c2" exitCode=0 Nov 28 13:35:15 crc kubenswrapper[4779]: I1128 13:35:15.034372 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5kft" event={"ID":"88f94bff-39c4-498c-a999-64c10e767465","Type":"ContainerDied","Data":"05faa8ed70d1e128ee274abf923c4daffb6940be10d9121bd837782bbd8b31c2"} Nov 28 13:35:16 crc kubenswrapper[4779]: I1128 13:35:16.289666 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:35:16 crc kubenswrapper[4779]: I1128 13:35:16.290021 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:35:18 crc kubenswrapper[4779]: I1128 13:35:18.069559 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5kft" event={"ID":"88f94bff-39c4-498c-a999-64c10e767465","Type":"ContainerStarted","Data":"fccf199ebdf27f4da9295d42d105b48e6162faf28a634ae4689662203fb2e5cd"} Nov 28 13:35:18 crc kubenswrapper[4779]: I1128 13:35:18.092421 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l5kft" podStartSLOduration=3.529385961 podStartE2EDuration="8.092401172s" podCreationTimestamp="2025-11-28 13:35:10 +0000 UTC" firstStartedPulling="2025-11-28 13:35:12.010118294 +0000 UTC m=+3572.575793648" lastFinishedPulling="2025-11-28 13:35:16.573133505 +0000 UTC m=+3577.138808859" observedRunningTime="2025-11-28 13:35:18.08597467 +0000 UTC m=+3578.651650024" watchObservedRunningTime="2025-11-28 13:35:18.092401172 +0000 UTC m=+3578.658076526" Nov 28 13:35:20 crc kubenswrapper[4779]: I1128 13:35:20.507304 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:20 crc kubenswrapper[4779]: I1128 13:35:20.507655 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:20 crc kubenswrapper[4779]: I1128 13:35:20.564014 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:30 crc kubenswrapper[4779]: I1128 13:35:30.564722 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:30 crc kubenswrapper[4779]: I1128 13:35:30.623274 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5kft"] Nov 28 13:35:31 crc kubenswrapper[4779]: I1128 13:35:31.196361 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l5kft" podUID="88f94bff-39c4-498c-a999-64c10e767465" containerName="registry-server" containerID="cri-o://fccf199ebdf27f4da9295d42d105b48e6162faf28a634ae4689662203fb2e5cd" gracePeriod=2 Nov 28 13:35:32 crc kubenswrapper[4779]: I1128 13:35:32.210260 4779 generic.go:334] "Generic (PLEG): container finished" podID="88f94bff-39c4-498c-a999-64c10e767465" containerID="fccf199ebdf27f4da9295d42d105b48e6162faf28a634ae4689662203fb2e5cd" exitCode=0 Nov 28 13:35:32 crc kubenswrapper[4779]: I1128 13:35:32.210328 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5kft" event={"ID":"88f94bff-39c4-498c-a999-64c10e767465","Type":"ContainerDied","Data":"fccf199ebdf27f4da9295d42d105b48e6162faf28a634ae4689662203fb2e5cd"} Nov 28 13:35:33 crc kubenswrapper[4779]: I1128 13:35:33.390606 4779 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:33 crc kubenswrapper[4779]: I1128 13:35:33.570530 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f94bff-39c4-498c-a999-64c10e767465-catalog-content\") pod \"88f94bff-39c4-498c-a999-64c10e767465\" (UID: \"88f94bff-39c4-498c-a999-64c10e767465\") " Nov 28 13:35:33 crc kubenswrapper[4779]: I1128 13:35:33.570889 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f94bff-39c4-498c-a999-64c10e767465-utilities\") pod \"88f94bff-39c4-498c-a999-64c10e767465\" (UID: \"88f94bff-39c4-498c-a999-64c10e767465\") " Nov 28 13:35:33 crc kubenswrapper[4779]: I1128 13:35:33.571083 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqn62\" (UniqueName: \"kubernetes.io/projected/88f94bff-39c4-498c-a999-64c10e767465-kube-api-access-rqn62\") pod \"88f94bff-39c4-498c-a999-64c10e767465\" (UID: \"88f94bff-39c4-498c-a999-64c10e767465\") " Nov 28 13:35:33 crc kubenswrapper[4779]: I1128 13:35:33.571676 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88f94bff-39c4-498c-a999-64c10e767465-utilities" (OuterVolumeSpecName: "utilities") pod "88f94bff-39c4-498c-a999-64c10e767465" (UID: "88f94bff-39c4-498c-a999-64c10e767465"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:35:33 crc kubenswrapper[4779]: I1128 13:35:33.577000 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f94bff-39c4-498c-a999-64c10e767465-kube-api-access-rqn62" (OuterVolumeSpecName: "kube-api-access-rqn62") pod "88f94bff-39c4-498c-a999-64c10e767465" (UID: "88f94bff-39c4-498c-a999-64c10e767465"). InnerVolumeSpecName "kube-api-access-rqn62". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:35:33 crc kubenswrapper[4779]: I1128 13:35:33.592744 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88f94bff-39c4-498c-a999-64c10e767465-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88f94bff-39c4-498c-a999-64c10e767465" (UID: "88f94bff-39c4-498c-a999-64c10e767465"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:35:33 crc kubenswrapper[4779]: I1128 13:35:33.673832 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqn62\" (UniqueName: \"kubernetes.io/projected/88f94bff-39c4-498c-a999-64c10e767465-kube-api-access-rqn62\") on node \"crc\" DevicePath \"\"" Nov 28 13:35:33 crc kubenswrapper[4779]: I1128 13:35:33.673872 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f94bff-39c4-498c-a999-64c10e767465-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:35:33 crc kubenswrapper[4779]: I1128 13:35:33.673881 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f94bff-39c4-498c-a999-64c10e767465-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:35:34 crc kubenswrapper[4779]: I1128 13:35:34.235376 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5kft" event={"ID":"88f94bff-39c4-498c-a999-64c10e767465","Type":"ContainerDied","Data":"77575f58ff7b8da58d7d6bca5ce569848e2ca1a4ac3bb62a7ab1d07498c48d37"} Nov 28 13:35:34 crc kubenswrapper[4779]: I1128 13:35:34.235432 4779 scope.go:117] "RemoveContainer" containerID="fccf199ebdf27f4da9295d42d105b48e6162faf28a634ae4689662203fb2e5cd" Nov 28 13:35:34 crc kubenswrapper[4779]: I1128 13:35:34.235755 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5kft" Nov 28 13:35:34 crc kubenswrapper[4779]: I1128 13:35:34.269388 4779 scope.go:117] "RemoveContainer" containerID="05faa8ed70d1e128ee274abf923c4daffb6940be10d9121bd837782bbd8b31c2" Nov 28 13:35:34 crc kubenswrapper[4779]: I1128 13:35:34.273320 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5kft"] Nov 28 13:35:34 crc kubenswrapper[4779]: I1128 13:35:34.285744 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5kft"] Nov 28 13:35:34 crc kubenswrapper[4779]: I1128 13:35:34.301847 4779 scope.go:117] "RemoveContainer" containerID="5d42436fac9922c6142be1e67f193a6fdf164f7b7edc768328367ccf8a786348" Nov 28 13:35:35 crc kubenswrapper[4779]: I1128 13:35:35.743455 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88f94bff-39c4-498c-a999-64c10e767465" path="/var/lib/kubelet/pods/88f94bff-39c4-498c-a999-64c10e767465/volumes" Nov 28 13:35:46 crc kubenswrapper[4779]: I1128 13:35:46.284625 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:35:46 crc kubenswrapper[4779]: I1128 13:35:46.285268 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:36:16 crc kubenswrapper[4779]: I1128 13:36:16.285528 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:36:16 crc kubenswrapper[4779]: I1128 13:36:16.286069 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:36:16 crc kubenswrapper[4779]: I1128 13:36:16.286137 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 13:36:16 crc kubenswrapper[4779]: I1128 13:36:16.286901 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 13:36:16 crc kubenswrapper[4779]: I1128 13:36:16.286957 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b" gracePeriod=600 Nov 28 13:36:16 crc kubenswrapper[4779]: E1128 13:36:16.405620 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:36:16 crc kubenswrapper[4779]: I1128 13:36:16.699299 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b" exitCode=0 Nov 28 13:36:16 crc kubenswrapper[4779]: I1128 13:36:16.699332 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"} Nov 28 13:36:16 crc kubenswrapper[4779]: I1128 13:36:16.699400 4779 scope.go:117] "RemoveContainer" containerID="3e44906474ac2ee22a4abe440e8a06eaadb743a6b2583926c64ac53c9f7bc166" Nov 28 13:36:16 crc kubenswrapper[4779]: I1128 13:36:16.700138 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b" Nov 28 13:36:16 crc kubenswrapper[4779]: E1128 13:36:16.700410 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:36:19 crc kubenswrapper[4779]: I1128 13:36:19.168505 4779 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7574d9569-x822f_f1d9753d-b49d-4e32-b312-137314283984/manager/0.log" Nov 28 13:36:20 crc kubenswrapper[4779]: I1128 13:36:20.564355 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 13:36:20 crc kubenswrapper[4779]: I1128 13:36:20.564672 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-api" containerID="cri-o://65d9f4333d5f67ce3f08ead46cc086a5137e47364fd6d0b611c02aa0758b1e67" gracePeriod=30 Nov 28 13:36:20 crc kubenswrapper[4779]: I1128 13:36:20.564749 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-listener" containerID="cri-o://1569f7fe3a2c5a8a83a36139284d2149d2f103e58644850f81e99bf0156c0e20" gracePeriod=30 Nov 28 13:36:20 crc kubenswrapper[4779]: I1128 13:36:20.564794 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-evaluator" containerID="cri-o://d59d05a8052f2346c3c880b77fffed198b1e8b9b0513f9a07a3023dafdbbc558" gracePeriod=30 Nov 28 13:36:20 crc kubenswrapper[4779]: I1128 13:36:20.564783 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-notifier" containerID="cri-o://15604321dc0fff32cd2201373db226dc785e61f91920b9976e77362b71e15b77" gracePeriod=30 Nov 28 13:36:21 crc kubenswrapper[4779]: I1128 13:36:21.758897 4779 generic.go:334] "Generic (PLEG): container finished" podID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerID="d59d05a8052f2346c3c880b77fffed198b1e8b9b0513f9a07a3023dafdbbc558" exitCode=0 Nov 28 13:36:21 crc kubenswrapper[4779]: I1128 13:36:21.759244 4779 generic.go:334] "Generic (PLEG): container finished" podID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerID="65d9f4333d5f67ce3f08ead46cc086a5137e47364fd6d0b611c02aa0758b1e67" exitCode=0 Nov 28 13:36:21 crc kubenswrapper[4779]: I1128 13:36:21.758927 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8","Type":"ContainerDied","Data":"d59d05a8052f2346c3c880b77fffed198b1e8b9b0513f9a07a3023dafdbbc558"} Nov 28 13:36:21 crc kubenswrapper[4779]: I1128 13:36:21.759305 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8","Type":"ContainerDied","Data":"65d9f4333d5f67ce3f08ead46cc086a5137e47364fd6d0b611c02aa0758b1e67"} Nov 28 13:36:26 crc kubenswrapper[4779]: I1128 13:36:26.051454 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-1e1c-account-create-update-g8rjh"] Nov 28 13:36:26 crc kubenswrapper[4779]: I1128 13:36:26.063121 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-qdwnv"] Nov 28 13:36:26 crc kubenswrapper[4779]: I1128 13:36:26.075897 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-qdwnv"] Nov 28 13:36:26 crc kubenswrapper[4779]: I1128 13:36:26.101697 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-1e1c-account-create-update-g8rjh"] Nov 28 13:36:27 crc kubenswrapper[4779]: I1128 13:36:27.741670 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="77380600-4b57-4e7d-94a1-b3ec588f6989" path="/var/lib/kubelet/pods/77380600-4b57-4e7d-94a1-b3ec588f6989/volumes" Nov 28 13:36:27 crc kubenswrapper[4779]: I1128 13:36:27.743934 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81" path="/var/lib/kubelet/pods/e6d3798e-7ea3-4c5d-8d4f-26f180cfdc81/volumes" Nov 28 13:36:27 crc kubenswrapper[4779]: I1128 13:36:27.826265 4779 generic.go:334] "Generic (PLEG): container finished" podID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerID="15604321dc0fff32cd2201373db226dc785e61f91920b9976e77362b71e15b77" exitCode=0 Nov 28 13:36:27 crc kubenswrapper[4779]: I1128 13:36:27.826308 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8","Type":"ContainerDied","Data":"15604321dc0fff32cd2201373db226dc785e61f91920b9976e77362b71e15b77"} Nov 28 13:36:28 crc kubenswrapper[4779]: I1128 13:36:28.726736 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b" Nov 28 13:36:28 crc kubenswrapper[4779]: E1128 13:36:28.727545 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:36:28 crc kubenswrapper[4779]: I1128 13:36:28.848943 4779 generic.go:334] "Generic (PLEG): container finished" podID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerID="1569f7fe3a2c5a8a83a36139284d2149d2f103e58644850f81e99bf0156c0e20" exitCode=0 Nov 28 13:36:28 crc kubenswrapper[4779]: I1128 13:36:28.848982 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8","Type":"ContainerDied","Data":"1569f7fe3a2c5a8a83a36139284d2149d2f103e58644850f81e99bf0156c0e20"} Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.085604 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.257080 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-combined-ca-bundle\") pod \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.257495 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-config-data\") pod \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.257579 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-public-tls-certs\") pod \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.257639 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-internal-tls-certs\") pod \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.257762 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-scripts\") pod \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.258948 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgnzw\" (UniqueName: \"kubernetes.io/projected/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-kube-api-access-vgnzw\") pod \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\" (UID: \"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8\") " Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.278943 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-scripts" (OuterVolumeSpecName: "scripts") pod "c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" (UID: "c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.281958 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-kube-api-access-vgnzw" (OuterVolumeSpecName: "kube-api-access-vgnzw") pod "c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" (UID: "c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8"). InnerVolumeSpecName "kube-api-access-vgnzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.333192 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" (UID: "c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.343384 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" (UID: "c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.364314 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgnzw\" (UniqueName: \"kubernetes.io/projected/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-kube-api-access-vgnzw\") on node \"crc\" DevicePath \"\"" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.364355 4779 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.364369 4779 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.364383 4779 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.385234 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-config-data" (OuterVolumeSpecName: "config-data") pod "c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" (UID: "c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.385306 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" (UID: "c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.467192 4779 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.467264 4779 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.861338 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8","Type":"ContainerDied","Data":"df614fe5f0d9d4e65bd3dce70698e4e467b800a1d75b163f8a31fb4ce425064a"} Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.861400 4779 scope.go:117] "RemoveContainer" containerID="1569f7fe3a2c5a8a83a36139284d2149d2f103e58644850f81e99bf0156c0e20" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.861444 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.895408 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.897916 4779 scope.go:117] "RemoveContainer" containerID="15604321dc0fff32cd2201373db226dc785e61f91920b9976e77362b71e15b77" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.911897 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.923586 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 28 13:36:29 crc kubenswrapper[4779]: E1128 13:36:29.924023 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f94bff-39c4-498c-a999-64c10e767465" containerName="extract-content" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.924043 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f94bff-39c4-498c-a999-64c10e767465" containerName="extract-content" Nov 28 13:36:29 crc kubenswrapper[4779]: E1128 13:36:29.924056 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-evaluator" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.924066 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-evaluator" Nov 28 13:36:29 crc kubenswrapper[4779]: E1128 13:36:29.924117 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-listener" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.924126 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-listener" Nov 28 13:36:29 crc kubenswrapper[4779]: E1128 13:36:29.924137 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-notifier" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.924145 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-notifier" Nov 28 13:36:29 crc kubenswrapper[4779]: E1128 13:36:29.924163 4779 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-api" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.924171 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-api" Nov 28 13:36:29 crc kubenswrapper[4779]: E1128 13:36:29.924181 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f94bff-39c4-498c-a999-64c10e767465" containerName="registry-server" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.924189 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f94bff-39c4-498c-a999-64c10e767465" containerName="registry-server" Nov 28 13:36:29 crc kubenswrapper[4779]: E1128 13:36:29.925304 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f94bff-39c4-498c-a999-64c10e767465" containerName="extract-utilities" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.925328 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f94bff-39c4-498c-a999-64c10e767465" containerName="extract-utilities" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.925719 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-evaluator" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.925738 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-listener" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.925756 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-api" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.925767 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" containerName="aodh-notifier" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.925775 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f94bff-39c4-498c-a999-64c10e767465" containerName="registry-server" Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.927473 4779 util.go:30] "No sandbox for pod can be found. 
Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.929738 4779 scope.go:117] "RemoveContainer" containerID="d59d05a8052f2346c3c880b77fffed198b1e8b9b0513f9a07a3023dafdbbc558"
Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.930551 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.931285 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.931475 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.931805 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-wdvkm"
Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.936974 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.938911 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 28 13:36:29 crc kubenswrapper[4779]: I1128 13:36:29.981968 4779 scope.go:117] "RemoveContainer" containerID="65d9f4333d5f67ce3f08ead46cc086a5137e47364fd6d0b611c02aa0758b1e67"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.082143 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-public-tls-certs\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.082398 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-config-data\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.082451 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-internal-tls-certs\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.082485 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.082539 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp4js\" (UniqueName: \"kubernetes.io/projected/11b20377-b66b-48ee-a4ac-a9f12faf621c-kube-api-access-pp4js\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.082564 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-scripts\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.183928 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-internal-tls-certs\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.184006 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.184112 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp4js\" (UniqueName: \"kubernetes.io/projected/11b20377-b66b-48ee-a4ac-a9f12faf621c-kube-api-access-pp4js\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.184148 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-scripts\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.184220 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-public-tls-certs\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.184244 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-config-data\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.188188 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-scripts\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.188729 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-internal-tls-certs\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.189126 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-config-data\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.190333 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-public-tls-certs\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.191879 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0"
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b20377-b66b-48ee-a4ac-a9f12faf621c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0" Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.207831 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp4js\" (UniqueName: \"kubernetes.io/projected/11b20377-b66b-48ee-a4ac-a9f12faf621c-kube-api-access-pp4js\") pod \"aodh-0\" (UID: \"11b20377-b66b-48ee-a4ac-a9f12faf621c\") " pod="openstack/aodh-0" Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.247173 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.745253 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.750791 4779 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 13:36:30 crc kubenswrapper[4779]: I1128 13:36:30.872227 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"11b20377-b66b-48ee-a4ac-a9f12faf621c","Type":"ContainerStarted","Data":"2b1374b91dbe792feff4019f34bcd7a2fbc98433c0fde0f53883e1974aa63fd4"} Nov 28 13:36:31 crc kubenswrapper[4779]: I1128 13:36:31.738389 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8" path="/var/lib/kubelet/pods/c307f6f9-8be0-4fdf-bd2c-f91cc9d27fe8/volumes" Nov 28 13:36:31 crc kubenswrapper[4779]: I1128 13:36:31.884525 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"11b20377-b66b-48ee-a4ac-a9f12faf621c","Type":"ContainerStarted","Data":"79a518f45ed8149f6ca8b5ec20e3a6070726e43d07997ea052bf2e49852dd4ef"} Nov 28 13:36:32 crc kubenswrapper[4779]: I1128 13:36:32.901758 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"11b20377-b66b-48ee-a4ac-a9f12faf621c","Type":"ContainerStarted","Data":"b2b64aabe38de7278b3a748c71108ad4d8cb346eaa55b629fcab8245fc28fc65"} Nov 28 13:36:33 crc kubenswrapper[4779]: I1128 13:36:33.913753 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"11b20377-b66b-48ee-a4ac-a9f12faf621c","Type":"ContainerStarted","Data":"a2d516325e89ef55edcd3a867bc5033b64c8571901b83ffe238d05891415e4d1"} Nov 28 13:36:34 crc kubenswrapper[4779]: I1128 13:36:34.931711 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"11b20377-b66b-48ee-a4ac-a9f12faf621c","Type":"ContainerStarted","Data":"2307398f9763774f0a4f821a82915e01174369dae061798c6095239a7cf580f6"} Nov 28 13:36:34 crc kubenswrapper[4779]: I1128 13:36:34.954570 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.95294781 podStartE2EDuration="5.954556068s" podCreationTimestamp="2025-11-28 13:36:29 +0000 UTC" firstStartedPulling="2025-11-28 13:36:30.750600839 +0000 UTC m=+3651.316276193" lastFinishedPulling="2025-11-28 13:36:33.752209087 +0000 UTC m=+3654.317884451" observedRunningTime="2025-11-28 13:36:34.951571308 +0000 UTC m=+3655.517246702" watchObservedRunningTime="2025-11-28 13:36:34.954556068 +0000 UTC m=+3655.520231422" Nov 28 13:36:37 crc kubenswrapper[4779]: I1128 13:36:37.067244 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-82qkk"] Nov 28 13:36:37 crc kubenswrapper[4779]: I1128 
Nov 28 13:36:37 crc kubenswrapper[4779]: I1128 13:36:37.737176 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1809c14b-75a0-4a41-b67f-e0a62aa53f0d" path="/var/lib/kubelet/pods/1809c14b-75a0-4a41-b67f-e0a62aa53f0d/volumes"
Nov 28 13:36:39 crc kubenswrapper[4779]: I1128 13:36:39.734878 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:36:39 crc kubenswrapper[4779]: E1128 13:36:39.735408 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:36:50 crc kubenswrapper[4779]: I1128 13:36:50.726859 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:36:50 crc kubenswrapper[4779]: E1128 13:36:50.728012 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:37:05 crc kubenswrapper[4779]: I1128 13:37:05.727601 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:37:05 crc kubenswrapper[4779]: E1128 13:37:05.728514 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:37:17 crc kubenswrapper[4779]: I1128 13:37:17.726815 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:37:17 crc kubenswrapper[4779]: E1128 13:37:17.727561 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:37:18 crc kubenswrapper[4779]: I1128 13:37:18.152316 4779 scope.go:117] "RemoveContainer" containerID="e808832d088605400992406c8f85f8cba6a1b7abe639f6668f8560035c59d1b7"
Nov 28 13:37:18 crc kubenswrapper[4779]: I1128 13:37:18.182825 4779 scope.go:117] "RemoveContainer" containerID="d2e71ec8d0c2de5825b847e5cad41abbc75bcf6d7be2354aa6e25512d15b1321"
Nov 28 13:37:18 crc kubenswrapper[4779]: I1128 13:37:18.274308 4779 scope.go:117] "RemoveContainer" containerID="8d706fa6c23626859c5045eaaaf12eea75ce69da87e1f3b013146f958eacfeb1"
Nov 28 13:37:31 crc kubenswrapper[4779]: I1128 13:37:31.726974 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:37:31 crc kubenswrapper[4779]: E1128 13:37:31.728167 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:37:44 crc kubenswrapper[4779]: I1128 13:37:44.726587 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:37:44 crc kubenswrapper[4779]: E1128 13:37:44.728538 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:37:57 crc kubenswrapper[4779]: I1128 13:37:57.727555 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:37:57 crc kubenswrapper[4779]: E1128 13:37:57.728833 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:38:07 crc kubenswrapper[4779]: I1128 13:38:07.097419 4779 patch_prober.go:28] interesting pod/console-66db7f5f8b-2swkg container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 13:38:07 crc kubenswrapper[4779]: I1128 13:38:07.097919 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-66db7f5f8b-2swkg" podUID="53fefe07-fd13-4ed1-b985-8f1c3ed47ce4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.44:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 28 13:38:08 crc kubenswrapper[4779]: I1128 13:38:08.726190 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:38:08 crc kubenswrapper[4779]: E1128 13:38:08.726898 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:38:20 crc kubenswrapper[4779]: I1128 13:38:20.726877 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b" Nov 28 13:38:20 crc kubenswrapper[4779]: E1128 13:38:20.727839 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:38:21 crc kubenswrapper[4779]: I1128 13:38:21.182842 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7574d9569-x822f_f1d9753d-b49d-4e32-b312-137314283984/manager/0.log" Nov 28 13:38:25 crc kubenswrapper[4779]: I1128 13:38:25.523145 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:38:25 crc kubenswrapper[4779]: I1128 13:38:25.527898 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="prometheus" containerID="cri-o://b2ddc23e2889eb987c122d6d78362a07a1bd627f16329e71e39b75aa73535ceb" gracePeriod=600 Nov 28 13:38:25 crc kubenswrapper[4779]: I1128 13:38:25.528795 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="thanos-sidecar" containerID="cri-o://b8844ee70f1989e958bb097a732de0b3b39cb3b35b5ec9b0efed624d1723af49" gracePeriod=600 Nov 28 13:38:25 crc kubenswrapper[4779]: I1128 13:38:25.528878 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="config-reloader" containerID="cri-o://47cb579ae249a9a130c5906e3c8d4c95f186297ccb64728c117d40e2d3f62192" gracePeriod=600 Nov 28 13:38:26 crc kubenswrapper[4779]: I1128 13:38:26.092854 4779 generic.go:334] "Generic (PLEG): container finished" podID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerID="b8844ee70f1989e958bb097a732de0b3b39cb3b35b5ec9b0efed624d1723af49" exitCode=0 Nov 28 13:38:26 crc kubenswrapper[4779]: I1128 13:38:26.093205 4779 generic.go:334] "Generic (PLEG): container finished" podID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerID="b2ddc23e2889eb987c122d6d78362a07a1bd627f16329e71e39b75aa73535ceb" exitCode=0 Nov 28 13:38:26 crc kubenswrapper[4779]: I1128 13:38:26.092926 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c406f7f5-9e76-44f6-b507-59c1b6d813d5","Type":"ContainerDied","Data":"b8844ee70f1989e958bb097a732de0b3b39cb3b35b5ec9b0efed624d1723af49"} Nov 28 13:38:26 crc kubenswrapper[4779]: I1128 13:38:26.093244 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c406f7f5-9e76-44f6-b507-59c1b6d813d5","Type":"ContainerDied","Data":"b2ddc23e2889eb987c122d6d78362a07a1bd627f16329e71e39b75aa73535ceb"} Nov 28 13:38:26 crc kubenswrapper[4779]: I1128 13:38:26.769988 4779 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" 
podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.19:9090/-/ready\": dial tcp 10.217.1.19:9090: connect: connection refused" Nov 28 13:38:27 crc kubenswrapper[4779]: I1128 13:38:27.107568 4779 generic.go:334] "Generic (PLEG): container finished" podID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerID="47cb579ae249a9a130c5906e3c8d4c95f186297ccb64728c117d40e2d3f62192" exitCode=0 Nov 28 13:38:27 crc kubenswrapper[4779]: I1128 13:38:27.107645 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c406f7f5-9e76-44f6-b507-59c1b6d813d5","Type":"ContainerDied","Data":"47cb579ae249a9a130c5906e3c8d4c95f186297ccb64728c117d40e2d3f62192"} Nov 28 13:38:27 crc kubenswrapper[4779]: I1128 13:38:27.961380 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.126911 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c406f7f5-9e76-44f6-b507-59c1b6d813d5","Type":"ContainerDied","Data":"e5ebe7b95a1afb818686b9fb73795ff4c3b2fb53607691395b48c5b574a9dda5"} Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.126966 4779 scope.go:117] "RemoveContainer" containerID="b8844ee70f1989e958bb097a732de0b3b39cb3b35b5ec9b0efed624d1723af49" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.127109 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.134810 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c406f7f5-9e76-44f6-b507-59c1b6d813d5-tls-assets\") pod \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.134850 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-config\") pod \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.135014 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c406f7f5-9e76-44f6-b507-59c1b6d813d5-config-out\") pod \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.135043 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c406f7f5-9e76-44f6-b507-59c1b6d813d5-prometheus-metric-storage-rulefiles-0\") pod \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.135143 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgwz7\" (UniqueName: \"kubernetes.io/projected/c406f7f5-9e76-44f6-b507-59c1b6d813d5-kube-api-access-xgwz7\") pod \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.135170 4779 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-secret-combined-ca-bundle\") pod \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.135209 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.135245 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config\") pod \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.135269 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.135309 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-thanos-prometheus-http-client-file\") pod \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.135411 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/c406f7f5-9e76-44f6-b507-59c1b6d813d5-prometheus-metric-storage-db\") pod \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\" (UID: \"c406f7f5-9e76-44f6-b507-59c1b6d813d5\") " Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.136696 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c406f7f5-9e76-44f6-b507-59c1b6d813d5-prometheus-metric-storage-db" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c406f7f5-9e76-44f6-b507-59c1b6d813d5" (UID: "c406f7f5-9e76-44f6-b507-59c1b6d813d5"). InnerVolumeSpecName "prometheus-metric-storage-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.141672 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c406f7f5-9e76-44f6-b507-59c1b6d813d5-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c406f7f5-9e76-44f6-b507-59c1b6d813d5" (UID: "c406f7f5-9e76-44f6-b507-59c1b6d813d5"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.143026 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-config" (OuterVolumeSpecName: "config") pod "c406f7f5-9e76-44f6-b507-59c1b6d813d5" (UID: "c406f7f5-9e76-44f6-b507-59c1b6d813d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.145859 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "c406f7f5-9e76-44f6-b507-59c1b6d813d5" (UID: "c406f7f5-9e76-44f6-b507-59c1b6d813d5"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.146006 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "c406f7f5-9e76-44f6-b507-59c1b6d813d5" (UID: "c406f7f5-9e76-44f6-b507-59c1b6d813d5"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.146253 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c406f7f5-9e76-44f6-b507-59c1b6d813d5-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c406f7f5-9e76-44f6-b507-59c1b6d813d5" (UID: "c406f7f5-9e76-44f6-b507-59c1b6d813d5"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.146990 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "c406f7f5-9e76-44f6-b507-59c1b6d813d5" (UID: "c406f7f5-9e76-44f6-b507-59c1b6d813d5"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.147812 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c406f7f5-9e76-44f6-b507-59c1b6d813d5-kube-api-access-xgwz7" (OuterVolumeSpecName: "kube-api-access-xgwz7") pod "c406f7f5-9e76-44f6-b507-59c1b6d813d5" (UID: "c406f7f5-9e76-44f6-b507-59c1b6d813d5"). InnerVolumeSpecName "kube-api-access-xgwz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.155166 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c406f7f5-9e76-44f6-b507-59c1b6d813d5" (UID: "c406f7f5-9e76-44f6-b507-59c1b6d813d5"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.167391 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c406f7f5-9e76-44f6-b507-59c1b6d813d5-config-out" (OuterVolumeSpecName: "config-out") pod "c406f7f5-9e76-44f6-b507-59c1b6d813d5" (UID: "c406f7f5-9e76-44f6-b507-59c1b6d813d5"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.169533 4779 scope.go:117] "RemoveContainer" containerID="47cb579ae249a9a130c5906e3c8d4c95f186297ccb64728c117d40e2d3f62192" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.238135 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgwz7\" (UniqueName: \"kubernetes.io/projected/c406f7f5-9e76-44f6-b507-59c1b6d813d5-kube-api-access-xgwz7\") on node \"crc\" DevicePath \"\"" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.238171 4779 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.238184 4779 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.238197 4779 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.238581 4779 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.238609 4779 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/c406f7f5-9e76-44f6-b507-59c1b6d813d5-prometheus-metric-storage-db\") on node \"crc\" DevicePath \"\"" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.238618 4779 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c406f7f5-9e76-44f6-b507-59c1b6d813d5-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.238628 4779 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-config\") on node \"crc\" DevicePath \"\"" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.238636 4779 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c406f7f5-9e76-44f6-b507-59c1b6d813d5-config-out\") on node \"crc\" DevicePath \"\"" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.238657 4779 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/c406f7f5-9e76-44f6-b507-59c1b6d813d5-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.253901 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config" (OuterVolumeSpecName: "web-config") pod "c406f7f5-9e76-44f6-b507-59c1b6d813d5" (UID: "c406f7f5-9e76-44f6-b507-59c1b6d813d5"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.300891 4779 scope.go:117] "RemoveContainer" containerID="b2ddc23e2889eb987c122d6d78362a07a1bd627f16329e71e39b75aa73535ceb" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.319389 4779 scope.go:117] "RemoveContainer" containerID="88f4a6826d6087cc6124157db64ab859f19b03d7bed5d3f7253fd1494c220fdc" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.340013 4779 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c406f7f5-9e76-44f6-b507-59c1b6d813d5-web-config\") on node \"crc\" DevicePath \"\"" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.461192 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.469990 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.504511 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:38:28 crc kubenswrapper[4779]: E1128 13:38:28.504930 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="prometheus" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.504949 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="prometheus" Nov 28 13:38:28 crc kubenswrapper[4779]: E1128 13:38:28.504956 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="config-reloader" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.504963 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="config-reloader" Nov 28 13:38:28 crc kubenswrapper[4779]: E1128 13:38:28.504977 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="thanos-sidecar" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.504984 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="thanos-sidecar" Nov 28 13:38:28 crc kubenswrapper[4779]: E1128 13:38:28.505008 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="init-config-reloader" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.505014 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="init-config-reloader" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.505238 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="thanos-sidecar" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.505257 4779 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="prometheus" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.505271 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" containerName="config-reloader" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.507036 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.508963 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.509070 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-k7vw5" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.509454 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.509606 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.509882 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.509924 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.523339 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.526157 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.644453 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-config\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.644703 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gsdp\" (UniqueName: \"kubernetes.io/projected/ee069283-02ed-414e-960c-7ae288363bb4-kube-api-access-6gsdp\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.644791 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.644913 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-web-config\") pod \"prometheus-metric-storage-0\" 
(UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.645022 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.645112 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ee069283-02ed-414e-960c-7ae288363bb4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.645203 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ee069283-02ed-414e-960c-7ae288363bb4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.645309 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ee069283-02ed-414e-960c-7ae288363bb4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.645416 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.645569 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/ee069283-02ed-414e-960c-7ae288363bb4-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.645660 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.747558 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gsdp\" (UniqueName: \"kubernetes.io/projected/ee069283-02ed-414e-960c-7ae288363bb4-kube-api-access-6gsdp\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 
13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.747861 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.748010 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.748592 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.748759 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ee069283-02ed-414e-960c-7ae288363bb4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.748838 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ee069283-02ed-414e-960c-7ae288363bb4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.748932 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ee069283-02ed-414e-960c-7ae288363bb4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.749046 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.749193 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/ee069283-02ed-414e-960c-7ae288363bb4-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.749274 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.749412 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-config\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.749715 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/ee069283-02ed-414e-960c-7ae288363bb4-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.750496 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ee069283-02ed-414e-960c-7ae288363bb4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.751168 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.751533 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ee069283-02ed-414e-960c-7ae288363bb4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.751756 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.751912 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ee069283-02ed-414e-960c-7ae288363bb4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.752320 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-config\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.754458 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.754723 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.757557 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ee069283-02ed-414e-960c-7ae288363bb4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.764967 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gsdp\" (UniqueName: \"kubernetes.io/projected/ee069283-02ed-414e-960c-7ae288363bb4-kube-api-access-6gsdp\") pod \"prometheus-metric-storage-0\" (UID: \"ee069283-02ed-414e-960c-7ae288363bb4\") " pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:28 crc kubenswrapper[4779]: I1128 13:38:28.837578 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:29 crc kubenswrapper[4779]: I1128 13:38:29.499685 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 28 13:38:29 crc kubenswrapper[4779]: I1128 13:38:29.745384 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c406f7f5-9e76-44f6-b507-59c1b6d813d5" path="/var/lib/kubelet/pods/c406f7f5-9e76-44f6-b507-59c1b6d813d5/volumes" Nov 28 13:38:30 crc kubenswrapper[4779]: I1128 13:38:30.148168 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ee069283-02ed-414e-960c-7ae288363bb4","Type":"ContainerStarted","Data":"93652ad092ed0e315f6a8431e238e1e73bc7649be6b28b2c65c08c23a7ca7ddc"} Nov 28 13:38:32 crc kubenswrapper[4779]: I1128 13:38:32.726735 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b" Nov 28 13:38:32 crc kubenswrapper[4779]: E1128 13:38:32.727551 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:38:34 crc kubenswrapper[4779]: I1128 13:38:34.197812 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ee069283-02ed-414e-960c-7ae288363bb4","Type":"ContainerStarted","Data":"a980e4cbcb8237dde72ad559f1289fbc0bbf99eafee800c4f9773afa44401140"} Nov 28 13:38:42 crc kubenswrapper[4779]: I1128 13:38:42.275965 4779 generic.go:334] "Generic (PLEG): container finished" 
podID="ee069283-02ed-414e-960c-7ae288363bb4" containerID="a980e4cbcb8237dde72ad559f1289fbc0bbf99eafee800c4f9773afa44401140" exitCode=0 Nov 28 13:38:42 crc kubenswrapper[4779]: I1128 13:38:42.276066 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ee069283-02ed-414e-960c-7ae288363bb4","Type":"ContainerDied","Data":"a980e4cbcb8237dde72ad559f1289fbc0bbf99eafee800c4f9773afa44401140"} Nov 28 13:38:43 crc kubenswrapper[4779]: I1128 13:38:43.286337 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ee069283-02ed-414e-960c-7ae288363bb4","Type":"ContainerStarted","Data":"c8822c822842d53df21e29bfb34115ac504983467342c89d7ed270c0a3ddcb5e"} Nov 28 13:38:46 crc kubenswrapper[4779]: I1128 13:38:46.727372 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b" Nov 28 13:38:46 crc kubenswrapper[4779]: E1128 13:38:46.728382 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:38:48 crc kubenswrapper[4779]: I1128 13:38:48.347878 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ee069283-02ed-414e-960c-7ae288363bb4","Type":"ContainerStarted","Data":"9db902fe6e575fc2efa2dbdb72b73549f728585eca378e58eeb7bdefb888aa02"} Nov 28 13:38:49 crc kubenswrapper[4779]: I1128 13:38:49.359444 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ee069283-02ed-414e-960c-7ae288363bb4","Type":"ContainerStarted","Data":"c3bca5878ff139db98da1e83380dee46c5d9241f7ded7b227c035667bff5026f"} Nov 28 13:38:49 crc kubenswrapper[4779]: I1128 13:38:49.391742 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.3917238 podStartE2EDuration="21.3917238s" podCreationTimestamp="2025-11-28 13:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 13:38:49.384630701 +0000 UTC m=+3789.950306075" watchObservedRunningTime="2025-11-28 13:38:49.3917238 +0000 UTC m=+3789.957399154" Nov 28 13:38:53 crc kubenswrapper[4779]: I1128 13:38:53.838249 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:58 crc kubenswrapper[4779]: I1128 13:38:58.837753 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:58 crc kubenswrapper[4779]: I1128 13:38:58.844939 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:59 crc kubenswrapper[4779]: I1128 13:38:59.471598 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 28 13:38:59 crc kubenswrapper[4779]: I1128 13:38:59.726828 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b" Nov 28 13:38:59 crc 
Nov 28 13:39:12 crc kubenswrapper[4779]: I1128 13:39:12.726787 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:39:12 crc kubenswrapper[4779]: E1128 13:39:12.727578 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:39:23 crc kubenswrapper[4779]: I1128 13:39:23.729973 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:39:23 crc kubenswrapper[4779]: E1128 13:39:23.731489 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:39:35 crc kubenswrapper[4779]: I1128 13:39:35.727405 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:39:35 crc kubenswrapper[4779]: E1128 13:39:35.728260 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:39:49 crc kubenswrapper[4779]: I1128 13:39:49.733721 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:39:49 crc kubenswrapper[4779]: E1128 13:39:49.734667 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:40:01 crc kubenswrapper[4779]: I1128 13:40:01.725564 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:40:01 crc kubenswrapper[4779]: E1128 13:40:01.726589 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:40:14 crc kubenswrapper[4779]: I1128 13:40:14.726742 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:40:14 crc kubenswrapper[4779]: E1128 13:40:14.728069 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:40:25 crc kubenswrapper[4779]: I1128 13:40:25.481449 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7574d9569-x822f_f1d9753d-b49d-4e32-b312-137314283984/manager/0.log"
Nov 28 13:40:25 crc kubenswrapper[4779]: I1128 13:40:25.726486 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:40:25 crc kubenswrapper[4779]: E1128 13:40:25.726805 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:40:39 crc kubenswrapper[4779]: I1128 13:40:39.744563 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b"
Nov 28 13:40:39 crc kubenswrapper[4779]: E1128 13:40:39.745457 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.116922 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xp8br/must-gather-cmg8t"]
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.118908 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xp8br/must-gather-cmg8t"
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.122059 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xp8br"/"default-dockercfg-77sqh"
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.122830 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xp8br"/"kube-root-ca.crt"
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.124707 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xp8br"/"openshift-service-ca.crt"
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.127994 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xp8br/must-gather-cmg8t"]
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.284765 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f527f\" (UniqueName: \"kubernetes.io/projected/7515361b-565f-4285-b116-a04b2e17a118-kube-api-access-f527f\") pod \"must-gather-cmg8t\" (UID: \"7515361b-565f-4285-b116-a04b2e17a118\") " pod="openshift-must-gather-xp8br/must-gather-cmg8t"
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.285074 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7515361b-565f-4285-b116-a04b2e17a118-must-gather-output\") pod \"must-gather-cmg8t\" (UID: \"7515361b-565f-4285-b116-a04b2e17a118\") " pod="openshift-must-gather-xp8br/must-gather-cmg8t"
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.387450 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f527f\" (UniqueName: \"kubernetes.io/projected/7515361b-565f-4285-b116-a04b2e17a118-kube-api-access-f527f\") pod \"must-gather-cmg8t\" (UID: \"7515361b-565f-4285-b116-a04b2e17a118\") " pod="openshift-must-gather-xp8br/must-gather-cmg8t"
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.387530 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7515361b-565f-4285-b116-a04b2e17a118-must-gather-output\") pod \"must-gather-cmg8t\" (UID: \"7515361b-565f-4285-b116-a04b2e17a118\") " pod="openshift-must-gather-xp8br/must-gather-cmg8t"
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.388037 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7515361b-565f-4285-b116-a04b2e17a118-must-gather-output\") pod \"must-gather-cmg8t\" (UID: \"7515361b-565f-4285-b116-a04b2e17a118\") " pod="openshift-must-gather-xp8br/must-gather-cmg8t"
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.408188 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f527f\" (UniqueName: \"kubernetes.io/projected/7515361b-565f-4285-b116-a04b2e17a118-kube-api-access-f527f\") pod \"must-gather-cmg8t\" (UID: \"7515361b-565f-4285-b116-a04b2e17a118\") " pod="openshift-must-gather-xp8br/must-gather-cmg8t"
Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.438576 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xp8br/must-gather-cmg8t" Nov 28 13:40:45 crc kubenswrapper[4779]: I1128 13:40:45.951473 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xp8br/must-gather-cmg8t"] Nov 28 13:40:45 crc kubenswrapper[4779]: W1128 13:40:45.953145 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7515361b_565f_4285_b116_a04b2e17a118.slice/crio-14b3c62bd9360210dce2d9b6b956a80420fafc9a5f31ed10e6969b49c722fbfa WatchSource:0}: Error finding container 14b3c62bd9360210dce2d9b6b956a80420fafc9a5f31ed10e6969b49c722fbfa: Status 404 returned error can't find the container with id 14b3c62bd9360210dce2d9b6b956a80420fafc9a5f31ed10e6969b49c722fbfa Nov 28 13:40:46 crc kubenswrapper[4779]: I1128 13:40:46.687044 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xp8br/must-gather-cmg8t" event={"ID":"7515361b-565f-4285-b116-a04b2e17a118","Type":"ContainerStarted","Data":"14b3c62bd9360210dce2d9b6b956a80420fafc9a5f31ed10e6969b49c722fbfa"} Nov 28 13:40:52 crc kubenswrapper[4779]: I1128 13:40:52.778070 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xp8br/must-gather-cmg8t" event={"ID":"7515361b-565f-4285-b116-a04b2e17a118","Type":"ContainerStarted","Data":"6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970"} Nov 28 13:40:53 crc kubenswrapper[4779]: I1128 13:40:53.793549 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xp8br/must-gather-cmg8t" event={"ID":"7515361b-565f-4285-b116-a04b2e17a118","Type":"ContainerStarted","Data":"714911f522596752886c3ef56d13df25b835fee6526c5037b116dbe61e0eeb92"} Nov 28 13:40:53 crc kubenswrapper[4779]: I1128 13:40:53.843195 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xp8br/must-gather-cmg8t" podStartSLOduration=4.806315623 podStartE2EDuration="8.843176055s" podCreationTimestamp="2025-11-28 13:40:45 +0000 UTC" firstStartedPulling="2025-11-28 13:40:45.955306164 +0000 UTC m=+3906.520981518" lastFinishedPulling="2025-11-28 13:40:49.992166596 +0000 UTC m=+3910.557841950" observedRunningTime="2025-11-28 13:40:53.8414821 +0000 UTC m=+3914.407157454" watchObservedRunningTime="2025-11-28 13:40:53.843176055 +0000 UTC m=+3914.408851409" Nov 28 13:40:54 crc kubenswrapper[4779]: I1128 13:40:54.726454 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b" Nov 28 13:40:54 crc kubenswrapper[4779]: E1128 13:40:54.726954 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:40:58 crc kubenswrapper[4779]: I1128 13:40:58.990255 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xp8br/crc-debug-zcn46"] Nov 28 13:40:58 crc kubenswrapper[4779]: I1128 13:40:58.995525 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xp8br/crc-debug-zcn46" Nov 28 13:40:59 crc kubenswrapper[4779]: I1128 13:40:59.098407 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66a187e4-9101-403e-8881-4192fe3d0db6-host\") pod \"crc-debug-zcn46\" (UID: \"66a187e4-9101-403e-8881-4192fe3d0db6\") " pod="openshift-must-gather-xp8br/crc-debug-zcn46" Nov 28 13:40:59 crc kubenswrapper[4779]: I1128 13:40:59.098864 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7pxr\" (UniqueName: \"kubernetes.io/projected/66a187e4-9101-403e-8881-4192fe3d0db6-kube-api-access-x7pxr\") pod \"crc-debug-zcn46\" (UID: \"66a187e4-9101-403e-8881-4192fe3d0db6\") " pod="openshift-must-gather-xp8br/crc-debug-zcn46" Nov 28 13:40:59 crc kubenswrapper[4779]: I1128 13:40:59.201213 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7pxr\" (UniqueName: \"kubernetes.io/projected/66a187e4-9101-403e-8881-4192fe3d0db6-kube-api-access-x7pxr\") pod \"crc-debug-zcn46\" (UID: \"66a187e4-9101-403e-8881-4192fe3d0db6\") " pod="openshift-must-gather-xp8br/crc-debug-zcn46" Nov 28 13:40:59 crc kubenswrapper[4779]: I1128 13:40:59.201322 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66a187e4-9101-403e-8881-4192fe3d0db6-host\") pod \"crc-debug-zcn46\" (UID: \"66a187e4-9101-403e-8881-4192fe3d0db6\") " pod="openshift-must-gather-xp8br/crc-debug-zcn46" Nov 28 13:40:59 crc kubenswrapper[4779]: I1128 13:40:59.201521 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66a187e4-9101-403e-8881-4192fe3d0db6-host\") pod \"crc-debug-zcn46\" (UID: \"66a187e4-9101-403e-8881-4192fe3d0db6\") " pod="openshift-must-gather-xp8br/crc-debug-zcn46" Nov 28 13:40:59 crc kubenswrapper[4779]: I1128 13:40:59.221127 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7pxr\" (UniqueName: \"kubernetes.io/projected/66a187e4-9101-403e-8881-4192fe3d0db6-kube-api-access-x7pxr\") pod \"crc-debug-zcn46\" (UID: \"66a187e4-9101-403e-8881-4192fe3d0db6\") " pod="openshift-must-gather-xp8br/crc-debug-zcn46" Nov 28 13:40:59 crc kubenswrapper[4779]: I1128 13:40:59.312752 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xp8br/crc-debug-zcn46" Nov 28 13:40:59 crc kubenswrapper[4779]: I1128 13:40:59.853738 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xp8br/crc-debug-zcn46" event={"ID":"66a187e4-9101-403e-8881-4192fe3d0db6","Type":"ContainerStarted","Data":"15b29eaebfda5771c2fec6b0a6c80825265ba636d7d88325ef0b4e5b44aaa5ca"} Nov 28 13:41:06 crc kubenswrapper[4779]: I1128 13:41:06.725814 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b" Nov 28 13:41:06 crc kubenswrapper[4779]: E1128 13:41:06.726596 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:41:18 crc kubenswrapper[4779]: E1128 13:41:18.671514 4779 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Nov 28 13:41:18 crc kubenswrapper[4779]: E1128 13:41:18.672136 4779 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7pxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-zcn46_openshift-must-gather-xp8br(66a187e4-9101-403e-8881-4192fe3d0db6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 13:41:18 crc kubenswrapper[4779]: E1128 13:41:18.673310 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-xp8br/crc-debug-zcn46" podUID="66a187e4-9101-403e-8881-4192fe3d0db6" Nov 28 13:41:19 crc kubenswrapper[4779]: E1128 13:41:19.100748 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-xp8br/crc-debug-zcn46" podUID="66a187e4-9101-403e-8881-4192fe3d0db6" Nov 28 13:41:20 crc kubenswrapper[4779]: I1128 13:41:20.726218 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b" Nov 28 13:41:26 crc kubenswrapper[4779]: I1128 13:41:26.186324 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"e9f5178ede5c569f5567852868c11a94380c8b3f324c6b6ddd47699da5e82c76"} Nov 28 13:41:32 crc kubenswrapper[4779]: I1128 13:41:32.728285 4779 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 13:41:34 crc kubenswrapper[4779]: I1128 13:41:34.284152 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xp8br/crc-debug-zcn46" event={"ID":"66a187e4-9101-403e-8881-4192fe3d0db6","Type":"ContainerStarted","Data":"0b1f970a4261be770a135e85fa6038a1f22f6010531bf41d118a82b47a6430ce"} Nov 28 13:41:34 crc kubenswrapper[4779]: I1128 13:41:34.302556 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xp8br/crc-debug-zcn46" podStartSLOduration=2.345013181 podStartE2EDuration="36.302541622s" podCreationTimestamp="2025-11-28 13:40:58 +0000 UTC" firstStartedPulling="2025-11-28 13:40:59.379157271 +0000 UTC 
m=+3919.944832635" lastFinishedPulling="2025-11-28 13:41:33.336685702 +0000 UTC m=+3953.902361076" observedRunningTime="2025-11-28 13:41:34.297354996 +0000 UTC m=+3954.863030350" watchObservedRunningTime="2025-11-28 13:41:34.302541622 +0000 UTC m=+3954.868216976" Nov 28 13:41:50 crc kubenswrapper[4779]: I1128 13:41:50.442580 4779 generic.go:334] "Generic (PLEG): container finished" podID="66a187e4-9101-403e-8881-4192fe3d0db6" containerID="0b1f970a4261be770a135e85fa6038a1f22f6010531bf41d118a82b47a6430ce" exitCode=0 Nov 28 13:41:50 crc kubenswrapper[4779]: I1128 13:41:50.443147 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xp8br/crc-debug-zcn46" event={"ID":"66a187e4-9101-403e-8881-4192fe3d0db6","Type":"ContainerDied","Data":"0b1f970a4261be770a135e85fa6038a1f22f6010531bf41d118a82b47a6430ce"} Nov 28 13:41:51 crc kubenswrapper[4779]: I1128 13:41:51.579805 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xp8br/crc-debug-zcn46" Nov 28 13:41:51 crc kubenswrapper[4779]: I1128 13:41:51.610132 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xp8br/crc-debug-zcn46"] Nov 28 13:41:51 crc kubenswrapper[4779]: I1128 13:41:51.619746 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xp8br/crc-debug-zcn46"] Nov 28 13:41:51 crc kubenswrapper[4779]: I1128 13:41:51.680247 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66a187e4-9101-403e-8881-4192fe3d0db6-host\") pod \"66a187e4-9101-403e-8881-4192fe3d0db6\" (UID: \"66a187e4-9101-403e-8881-4192fe3d0db6\") " Nov 28 13:41:51 crc kubenswrapper[4779]: I1128 13:41:51.680296 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7pxr\" (UniqueName: \"kubernetes.io/projected/66a187e4-9101-403e-8881-4192fe3d0db6-kube-api-access-x7pxr\") pod \"66a187e4-9101-403e-8881-4192fe3d0db6\" (UID: \"66a187e4-9101-403e-8881-4192fe3d0db6\") " Nov 28 13:41:51 crc kubenswrapper[4779]: I1128 13:41:51.680387 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66a187e4-9101-403e-8881-4192fe3d0db6-host" (OuterVolumeSpecName: "host") pod "66a187e4-9101-403e-8881-4192fe3d0db6" (UID: "66a187e4-9101-403e-8881-4192fe3d0db6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 13:41:51 crc kubenswrapper[4779]: I1128 13:41:51.680791 4779 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/66a187e4-9101-403e-8881-4192fe3d0db6-host\") on node \"crc\" DevicePath \"\"" Nov 28 13:41:51 crc kubenswrapper[4779]: I1128 13:41:51.687371 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a187e4-9101-403e-8881-4192fe3d0db6-kube-api-access-x7pxr" (OuterVolumeSpecName: "kube-api-access-x7pxr") pod "66a187e4-9101-403e-8881-4192fe3d0db6" (UID: "66a187e4-9101-403e-8881-4192fe3d0db6"). InnerVolumeSpecName "kube-api-access-x7pxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:41:51 crc kubenswrapper[4779]: I1128 13:41:51.737984 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a187e4-9101-403e-8881-4192fe3d0db6" path="/var/lib/kubelet/pods/66a187e4-9101-403e-8881-4192fe3d0db6/volumes" Nov 28 13:41:51 crc kubenswrapper[4779]: I1128 13:41:51.782450 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7pxr\" (UniqueName: \"kubernetes.io/projected/66a187e4-9101-403e-8881-4192fe3d0db6-kube-api-access-x7pxr\") on node \"crc\" DevicePath \"\"" Nov 28 13:41:52 crc kubenswrapper[4779]: I1128 13:41:52.462908 4779 scope.go:117] "RemoveContainer" containerID="0b1f970a4261be770a135e85fa6038a1f22f6010531bf41d118a82b47a6430ce" Nov 28 13:41:52 crc kubenswrapper[4779]: I1128 13:41:52.463000 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xp8br/crc-debug-zcn46" Nov 28 13:41:52 crc kubenswrapper[4779]: I1128 13:41:52.822856 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xp8br/crc-debug-mjwhx"] Nov 28 13:41:52 crc kubenswrapper[4779]: E1128 13:41:52.824451 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a187e4-9101-403e-8881-4192fe3d0db6" containerName="container-00" Nov 28 13:41:52 crc kubenswrapper[4779]: I1128 13:41:52.824672 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a187e4-9101-403e-8881-4192fe3d0db6" containerName="container-00" Nov 28 13:41:52 crc kubenswrapper[4779]: I1128 13:41:52.825030 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a187e4-9101-403e-8881-4192fe3d0db6" containerName="container-00" Nov 28 13:41:52 crc kubenswrapper[4779]: I1128 13:41:52.825969 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xp8br/crc-debug-mjwhx" Nov 28 13:41:52 crc kubenswrapper[4779]: I1128 13:41:52.903899 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5e87be3-9201-490d-afd4-da02870f3f02-host\") pod \"crc-debug-mjwhx\" (UID: \"e5e87be3-9201-490d-afd4-da02870f3f02\") " pod="openshift-must-gather-xp8br/crc-debug-mjwhx" Nov 28 13:41:52 crc kubenswrapper[4779]: I1128 13:41:52.903967 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smm8b\" (UniqueName: \"kubernetes.io/projected/e5e87be3-9201-490d-afd4-da02870f3f02-kube-api-access-smm8b\") pod \"crc-debug-mjwhx\" (UID: \"e5e87be3-9201-490d-afd4-da02870f3f02\") " pod="openshift-must-gather-xp8br/crc-debug-mjwhx" Nov 28 13:41:53 crc kubenswrapper[4779]: I1128 13:41:53.006234 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5e87be3-9201-490d-afd4-da02870f3f02-host\") pod \"crc-debug-mjwhx\" (UID: \"e5e87be3-9201-490d-afd4-da02870f3f02\") " pod="openshift-must-gather-xp8br/crc-debug-mjwhx" Nov 28 13:41:53 crc kubenswrapper[4779]: I1128 13:41:53.006289 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smm8b\" (UniqueName: \"kubernetes.io/projected/e5e87be3-9201-490d-afd4-da02870f3f02-kube-api-access-smm8b\") pod \"crc-debug-mjwhx\" (UID: \"e5e87be3-9201-490d-afd4-da02870f3f02\") " pod="openshift-must-gather-xp8br/crc-debug-mjwhx" Nov 28 13:41:53 crc kubenswrapper[4779]: I1128 13:41:53.006777 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5e87be3-9201-490d-afd4-da02870f3f02-host\") pod \"crc-debug-mjwhx\" (UID: \"e5e87be3-9201-490d-afd4-da02870f3f02\") " pod="openshift-must-gather-xp8br/crc-debug-mjwhx" Nov 28 13:41:53 crc kubenswrapper[4779]: I1128 13:41:53.022702 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smm8b\" (UniqueName: \"kubernetes.io/projected/e5e87be3-9201-490d-afd4-da02870f3f02-kube-api-access-smm8b\") pod \"crc-debug-mjwhx\" (UID: \"e5e87be3-9201-490d-afd4-da02870f3f02\") " pod="openshift-must-gather-xp8br/crc-debug-mjwhx" Nov 28 13:41:53 crc kubenswrapper[4779]: I1128 13:41:53.155502 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xp8br/crc-debug-mjwhx" Nov 28 13:41:53 crc kubenswrapper[4779]: W1128 13:41:53.195376 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e87be3_9201_490d_afd4_da02870f3f02.slice/crio-e0594ed377faf1cb3fd13db5b594efdbdbb2129766cb37b0058f9e3c3dd5d907 WatchSource:0}: Error finding container e0594ed377faf1cb3fd13db5b594efdbdbb2129766cb37b0058f9e3c3dd5d907: Status 404 returned error can't find the container with id e0594ed377faf1cb3fd13db5b594efdbdbb2129766cb37b0058f9e3c3dd5d907 Nov 28 13:41:53 crc kubenswrapper[4779]: I1128 13:41:53.482033 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xp8br/crc-debug-mjwhx" event={"ID":"e5e87be3-9201-490d-afd4-da02870f3f02","Type":"ContainerStarted","Data":"2707d4e69fba2a322dd87cca4f5b71f51dcac63a280853aac7bca2b1fd7133e5"} Nov 28 13:41:53 crc kubenswrapper[4779]: I1128 13:41:53.482548 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xp8br/crc-debug-mjwhx" event={"ID":"e5e87be3-9201-490d-afd4-da02870f3f02","Type":"ContainerStarted","Data":"e0594ed377faf1cb3fd13db5b594efdbdbb2129766cb37b0058f9e3c3dd5d907"} Nov 28 13:41:53 crc kubenswrapper[4779]: I1128 13:41:53.532245 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xp8br/crc-debug-mjwhx"] Nov 28 13:41:53 crc kubenswrapper[4779]: I1128 13:41:53.539082 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xp8br/crc-debug-mjwhx"] Nov 28 13:41:53 crc kubenswrapper[4779]: E1128 13:41:53.641114 4779 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e87be3_9201_490d_afd4_da02870f3f02.slice/crio-2707d4e69fba2a322dd87cca4f5b71f51dcac63a280853aac7bca2b1fd7133e5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e87be3_9201_490d_afd4_da02870f3f02.slice/crio-conmon-2707d4e69fba2a322dd87cca4f5b71f51dcac63a280853aac7bca2b1fd7133e5.scope\": RecentStats: unable to find data in memory cache]" Nov 28 13:41:54 crc kubenswrapper[4779]: I1128 13:41:54.495288 4779 generic.go:334] "Generic (PLEG): container finished" podID="e5e87be3-9201-490d-afd4-da02870f3f02" containerID="2707d4e69fba2a322dd87cca4f5b71f51dcac63a280853aac7bca2b1fd7133e5" exitCode=1 Nov 28 13:41:54 crc kubenswrapper[4779]: I1128 13:41:54.628648 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xp8br/crc-debug-mjwhx" Nov 28 13:41:54 crc kubenswrapper[4779]: I1128 13:41:54.739133 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5e87be3-9201-490d-afd4-da02870f3f02-host\") pod \"e5e87be3-9201-490d-afd4-da02870f3f02\" (UID: \"e5e87be3-9201-490d-afd4-da02870f3f02\") " Nov 28 13:41:54 crc kubenswrapper[4779]: I1128 13:41:54.739479 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smm8b\" (UniqueName: \"kubernetes.io/projected/e5e87be3-9201-490d-afd4-da02870f3f02-kube-api-access-smm8b\") pod \"e5e87be3-9201-490d-afd4-da02870f3f02\" (UID: \"e5e87be3-9201-490d-afd4-da02870f3f02\") " Nov 28 13:41:54 crc kubenswrapper[4779]: I1128 13:41:54.741506 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5e87be3-9201-490d-afd4-da02870f3f02-host" (OuterVolumeSpecName: "host") pod "e5e87be3-9201-490d-afd4-da02870f3f02" (UID: "e5e87be3-9201-490d-afd4-da02870f3f02"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 13:41:54 crc kubenswrapper[4779]: I1128 13:41:54.749607 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e87be3-9201-490d-afd4-da02870f3f02-kube-api-access-smm8b" (OuterVolumeSpecName: "kube-api-access-smm8b") pod "e5e87be3-9201-490d-afd4-da02870f3f02" (UID: "e5e87be3-9201-490d-afd4-da02870f3f02"). InnerVolumeSpecName "kube-api-access-smm8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:41:54 crc kubenswrapper[4779]: I1128 13:41:54.841803 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smm8b\" (UniqueName: \"kubernetes.io/projected/e5e87be3-9201-490d-afd4-da02870f3f02-kube-api-access-smm8b\") on node \"crc\" DevicePath \"\"" Nov 28 13:41:54 crc kubenswrapper[4779]: I1128 13:41:54.841838 4779 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5e87be3-9201-490d-afd4-da02870f3f02-host\") on node \"crc\" DevicePath \"\"" Nov 28 13:41:55 crc kubenswrapper[4779]: I1128 13:41:55.510302 4779 scope.go:117] "RemoveContainer" containerID="2707d4e69fba2a322dd87cca4f5b71f51dcac63a280853aac7bca2b1fd7133e5" Nov 28 13:41:55 crc kubenswrapper[4779]: I1128 13:41:55.510350 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xp8br/crc-debug-mjwhx" Nov 28 13:41:55 crc kubenswrapper[4779]: I1128 13:41:55.747663 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e87be3-9201-490d-afd4-da02870f3f02" path="/var/lib/kubelet/pods/e5e87be3-9201-490d-afd4-da02870f3f02/volumes" Nov 28 13:42:36 crc kubenswrapper[4779]: I1128 13:42:36.289349 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7222438c-fe9c-429a-899e-269d84def6d7/init-config-reloader/0.log" Nov 28 13:42:36 crc kubenswrapper[4779]: I1128 13:42:36.443417 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7222438c-fe9c-429a-899e-269d84def6d7/init-config-reloader/0.log" Nov 28 13:42:36 crc kubenswrapper[4779]: I1128 13:42:36.500039 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7222438c-fe9c-429a-899e-269d84def6d7/alertmanager/0.log" Nov 28 13:42:36 crc kubenswrapper[4779]: I1128 13:42:36.506984 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7222438c-fe9c-429a-899e-269d84def6d7/config-reloader/0.log" Nov 28 13:42:36 crc kubenswrapper[4779]: I1128 13:42:36.654939 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_11b20377-b66b-48ee-a4ac-a9f12faf621c/aodh-api/0.log" Nov 28 13:42:36 crc kubenswrapper[4779]: I1128 13:42:36.713019 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_11b20377-b66b-48ee-a4ac-a9f12faf621c/aodh-evaluator/0.log" Nov 28 13:42:36 crc kubenswrapper[4779]: I1128 13:42:36.715396 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_11b20377-b66b-48ee-a4ac-a9f12faf621c/aodh-listener/0.log" Nov 28 13:42:36 crc kubenswrapper[4779]: I1128 13:42:36.796024 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_11b20377-b66b-48ee-a4ac-a9f12faf621c/aodh-notifier/0.log" Nov 28 13:42:36 crc kubenswrapper[4779]: I1128 13:42:36.935190 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b764d4b5d-q6jq2_2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9/barbican-api/0.log" Nov 28 13:42:36 crc kubenswrapper[4779]: I1128 13:42:36.944822 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b764d4b5d-q6jq2_2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9/barbican-api-log/0.log" Nov 28 13:42:37 crc kubenswrapper[4779]: I1128 13:42:37.117774 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7784844594-g7gws_6f944a10-9e80-47a5-8ad8-3b6edc0c3315/barbican-keystone-listener/0.log" Nov 28 13:42:37 crc kubenswrapper[4779]: I1128 13:42:37.191167 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7784844594-g7gws_6f944a10-9e80-47a5-8ad8-3b6edc0c3315/barbican-keystone-listener-log/0.log" Nov 28 13:42:37 crc kubenswrapper[4779]: I1128 13:42:37.266696 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79c7d84d4c-82wcz_684d6129-3c1c-43df-b258-c32b447736d1/barbican-worker/0.log" Nov 28 13:42:37 crc kubenswrapper[4779]: I1128 13:42:37.356538 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79c7d84d4c-82wcz_684d6129-3c1c-43df-b258-c32b447736d1/barbican-worker-log/0.log" Nov 28 13:42:37 crc kubenswrapper[4779]: I1128 13:42:37.512831 4779 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f_81be23ef-d854-4ac3-8f39-601540e013ea/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:37 crc kubenswrapper[4779]: I1128 13:42:37.628698 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13db3856-5125-439c-86a8-4493e5619b44/ceilometer-central-agent/0.log" Nov 28 13:42:37 crc kubenswrapper[4779]: I1128 13:42:37.731238 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13db3856-5125-439c-86a8-4493e5619b44/ceilometer-notification-agent/0.log" Nov 28 13:42:37 crc kubenswrapper[4779]: I1128 13:42:37.771794 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13db3856-5125-439c-86a8-4493e5619b44/proxy-httpd/0.log" Nov 28 13:42:37 crc kubenswrapper[4779]: I1128 13:42:37.802032 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13db3856-5125-439c-86a8-4493e5619b44/sg-core/0.log" Nov 28 13:42:38 crc kubenswrapper[4779]: I1128 13:42:38.003268 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8605d0be-235c-4b63-8781-ea140c60e622/cinder-api/0.log" Nov 28 13:42:38 crc kubenswrapper[4779]: I1128 13:42:38.010303 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8605d0be-235c-4b63-8781-ea140c60e622/cinder-api-log/0.log" Nov 28 13:42:38 crc kubenswrapper[4779]: I1128 13:42:38.237418 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b208660d-de0e-4218-a31b-66ce968db066/cinder-scheduler/0.log" Nov 28 13:42:38 crc kubenswrapper[4779]: I1128 13:42:38.237876 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b208660d-de0e-4218-a31b-66ce968db066/probe/0.log" Nov 28 13:42:38 crc kubenswrapper[4779]: I1128 13:42:38.345168 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc_c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:38 crc kubenswrapper[4779]: I1128 13:42:38.480575 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-45xv5_21c70f9d-fd7b-4629-8b4f-0f745fd9eccb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:38 crc kubenswrapper[4779]: I1128 13:42:38.613699 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-t8mt8_4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f/init/0.log" Nov 28 13:42:38 crc kubenswrapper[4779]: I1128 13:42:38.810461 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-t8mt8_4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f/dnsmasq-dns/0.log" Nov 28 13:42:38 crc kubenswrapper[4779]: I1128 13:42:38.843615 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz_867eb458-fc69-4d08-958e-67f69bbf7ec9/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:38 crc kubenswrapper[4779]: I1128 13:42:38.846205 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-t8mt8_4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f/init/0.log" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.093662 4779 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_44e7698e-14e1-4bbe-849b-3a90b6ebd431/glance-log/0.log" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.134479 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_44e7698e-14e1-4bbe-849b-3a90b6ebd431/glance-httpd/0.log" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.511947 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_05d19641-3a16-482f-bcaf-da12573ca2e6/glance-log/0.log" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.687128 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nx8g9"] Nov 28 13:42:39 crc kubenswrapper[4779]: E1128 13:42:39.687832 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e87be3-9201-490d-afd4-da02870f3f02" containerName="container-00" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.687852 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e87be3-9201-490d-afd4-da02870f3f02" containerName="container-00" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.688054 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e87be3-9201-490d-afd4-da02870f3f02" containerName="container-00" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.689444 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.699057 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nx8g9"] Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.835959 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebbe3fe-2895-4662-b623-d66afd272cdd-catalog-content\") pod \"redhat-operators-nx8g9\" (UID: \"7ebbe3fe-2895-4662-b623-d66afd272cdd\") " pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.869630 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6rl\" (UniqueName: \"kubernetes.io/projected/7ebbe3fe-2895-4662-b623-d66afd272cdd-kube-api-access-ps6rl\") pod \"redhat-operators-nx8g9\" (UID: \"7ebbe3fe-2895-4662-b623-d66afd272cdd\") " pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.869776 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebbe3fe-2895-4662-b623-d66afd272cdd-utilities\") pod \"redhat-operators-nx8g9\" (UID: \"7ebbe3fe-2895-4662-b623-d66afd272cdd\") " pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.923363 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_05d19641-3a16-482f-bcaf-da12573ca2e6/glance-httpd/0.log" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.975446 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebbe3fe-2895-4662-b623-d66afd272cdd-catalog-content\") pod \"redhat-operators-nx8g9\" (UID: \"7ebbe3fe-2895-4662-b623-d66afd272cdd\") " pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:39 crc 
kubenswrapper[4779]: I1128 13:42:39.975535 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps6rl\" (UniqueName: \"kubernetes.io/projected/7ebbe3fe-2895-4662-b623-d66afd272cdd-kube-api-access-ps6rl\") pod \"redhat-operators-nx8g9\" (UID: \"7ebbe3fe-2895-4662-b623-d66afd272cdd\") " pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.975610 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebbe3fe-2895-4662-b623-d66afd272cdd-utilities\") pod \"redhat-operators-nx8g9\" (UID: \"7ebbe3fe-2895-4662-b623-d66afd272cdd\") " pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.977436 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebbe3fe-2895-4662-b623-d66afd272cdd-catalog-content\") pod \"redhat-operators-nx8g9\" (UID: \"7ebbe3fe-2895-4662-b623-d66afd272cdd\") " pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:39 crc kubenswrapper[4779]: I1128 13:42:39.977704 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebbe3fe-2895-4662-b623-d66afd272cdd-utilities\") pod \"redhat-operators-nx8g9\" (UID: \"7ebbe3fe-2895-4662-b623-d66afd272cdd\") " pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:40 crc kubenswrapper[4779]: I1128 13:42:40.000209 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps6rl\" (UniqueName: \"kubernetes.io/projected/7ebbe3fe-2895-4662-b623-d66afd272cdd-kube-api-access-ps6rl\") pod \"redhat-operators-nx8g9\" (UID: \"7ebbe3fe-2895-4662-b623-d66afd272cdd\") " pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:40 crc kubenswrapper[4779]: I1128 13:42:40.181628 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:40 crc kubenswrapper[4779]: I1128 13:42:40.263821 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6dc88d6fdd-9vtxx_a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b/heat-engine/0.log" Nov 28 13:42:40 crc kubenswrapper[4779]: I1128 13:42:40.490297 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5675dff4b5-5c9sq_ec26de16-988c-4242-8de5-e379eeff18d8/heat-api/0.log" Nov 28 13:42:40 crc kubenswrapper[4779]: I1128 13:42:40.507970 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-74c96b7975-gndjl_b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325/heat-cfnapi/0.log" Nov 28 13:42:40 crc kubenswrapper[4779]: I1128 13:42:40.510029 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-htvcg_85a14d9a-2667-48a0-83c1-2e37f92590fb/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:40 crc kubenswrapper[4779]: I1128 13:42:40.707084 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nx8g9"] Nov 28 13:42:40 crc kubenswrapper[4779]: I1128 13:42:40.803163 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-68b65c9788-nmrvn_8da74c5c-34bf-4136-a395-51d2be7258db/keystone-api/0.log" Nov 28 13:42:40 crc kubenswrapper[4779]: I1128 13:42:40.859356 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-cjqrj_10420a90-84fa-45f9-a726-b3fcb8db4a20/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:40 crc kubenswrapper[4779]: I1128 13:42:40.954448 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29405581-vh7vm_8ad47080-f81f-4366-ac8b-b110a18c1834/keystone-cron/0.log" Nov 28 13:42:41 crc kubenswrapper[4779]: I1128 13:42:41.000959 4779 generic.go:334] "Generic (PLEG): container finished" podID="7ebbe3fe-2895-4662-b623-d66afd272cdd" containerID="342c4a791c04616f98178f6b4b309b7121a1f819c69e0a681de4d850c0fb2680" exitCode=0 Nov 28 13:42:41 crc kubenswrapper[4779]: I1128 13:42:41.001007 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx8g9" event={"ID":"7ebbe3fe-2895-4662-b623-d66afd272cdd","Type":"ContainerDied","Data":"342c4a791c04616f98178f6b4b309b7121a1f819c69e0a681de4d850c0fb2680"} Nov 28 13:42:41 crc kubenswrapper[4779]: I1128 13:42:41.001040 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx8g9" event={"ID":"7ebbe3fe-2895-4662-b623-d66afd272cdd","Type":"ContainerStarted","Data":"accfeaee85ff7735712b3fbae07e84a0df8a76b52ce652de0b08144973883793"} Nov 28 13:42:41 crc kubenswrapper[4779]: I1128 13:42:41.128288 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4e75c99e-5273-44e8-a5d1-98b317b5dacf/kube-state-metrics/0.log" Nov 28 13:42:41 crc kubenswrapper[4779]: I1128 13:42:41.218860 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh_303327cf-5fdb-49b9-a9ee-f8498657b10d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:41 crc kubenswrapper[4779]: I1128 13:42:41.636908 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7756d796d9-vcgbk_0cb2d061-fb70-4108-8204-9bf7e699c89f/neutron-api/0.log" Nov 28 13:42:41 crc 
kubenswrapper[4779]: I1128 13:42:41.738683 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7756d796d9-vcgbk_0cb2d061-fb70-4108-8204-9bf7e699c89f/neutron-httpd/0.log" Nov 28 13:42:41 crc kubenswrapper[4779]: I1128 13:42:41.981950 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8_6b291c86-c80b-41e0-9ebd-bff5f1d3de42/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.093675 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g7ql9"] Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.096742 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.105873 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g7ql9"] Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.173034 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ccd118c8-a309-4e17-952e-647ce404bbeb/nova-api-log/0.log" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.231612 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxgjn\" (UniqueName: \"kubernetes.io/projected/9dbd1ab1-111a-4649-a20f-69fae58be8bc-kube-api-access-gxgjn\") pod \"community-operators-g7ql9\" (UID: \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\") " pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.231753 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbd1ab1-111a-4649-a20f-69fae58be8bc-catalog-content\") pod \"community-operators-g7ql9\" (UID: \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\") " pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.231882 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbd1ab1-111a-4649-a20f-69fae58be8bc-utilities\") pod \"community-operators-g7ql9\" (UID: \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\") " pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.333227 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbd1ab1-111a-4649-a20f-69fae58be8bc-catalog-content\") pod \"community-operators-g7ql9\" (UID: \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\") " pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.333582 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbd1ab1-111a-4649-a20f-69fae58be8bc-utilities\") pod \"community-operators-g7ql9\" (UID: \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\") " pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.333714 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxgjn\" (UniqueName: \"kubernetes.io/projected/9dbd1ab1-111a-4649-a20f-69fae58be8bc-kube-api-access-gxgjn\") pod 
\"community-operators-g7ql9\" (UID: \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\") " pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.334670 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbd1ab1-111a-4649-a20f-69fae58be8bc-catalog-content\") pod \"community-operators-g7ql9\" (UID: \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\") " pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.335048 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbd1ab1-111a-4649-a20f-69fae58be8bc-utilities\") pod \"community-operators-g7ql9\" (UID: \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\") " pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.365008 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxgjn\" (UniqueName: \"kubernetes.io/projected/9dbd1ab1-111a-4649-a20f-69fae58be8bc-kube-api-access-gxgjn\") pod \"community-operators-g7ql9\" (UID: \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\") " pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.389205 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_55d804a5-57cf-458c-8941-c0ec9ea50d24/nova-cell0-conductor-conductor/0.log" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.454301 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:42 crc kubenswrapper[4779]: I1128 13:42:42.502784 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ccd118c8-a309-4e17-952e-647ce404bbeb/nova-api-api/0.log" Nov 28 13:42:43 crc kubenswrapper[4779]: I1128 13:42:43.053616 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx8g9" event={"ID":"7ebbe3fe-2895-4662-b623-d66afd272cdd","Type":"ContainerStarted","Data":"efdf4e45fdc6d5255bfd4beca495438aff9a71da88cebf4c1e0c10245f024f4e"} Nov 28 13:42:43 crc kubenswrapper[4779]: I1128 13:42:43.069774 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g7ql9"] Nov 28 13:42:43 crc kubenswrapper[4779]: W1128 13:42:43.092277 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dbd1ab1_111a_4649_a20f_69fae58be8bc.slice/crio-467dd6ee9985dad90acc52187bff8841f5509aa6675a6da9accc91cd469119ad WatchSource:0}: Error finding container 467dd6ee9985dad90acc52187bff8841f5509aa6675a6da9accc91cd469119ad: Status 404 returned error can't find the container with id 467dd6ee9985dad90acc52187bff8841f5509aa6675a6da9accc91cd469119ad Nov 28 13:42:43 crc kubenswrapper[4779]: I1128 13:42:43.396175 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c2f7b630-265b-4501-87b8-44f47fe9a11f/nova-cell1-novncproxy-novncproxy/0.log" Nov 28 13:42:43 crc kubenswrapper[4779]: I1128 13:42:43.460118 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_bfd44820-2805-4e67-a5f8-05a5a31dc047/nova-cell1-conductor-conductor/0.log" Nov 28 13:42:43 crc kubenswrapper[4779]: I1128 13:42:43.769389 4779 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-k9nk4_42f930a2-ac0c-43b5-ab17-1ccd2f30340e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:43 crc kubenswrapper[4779]: I1128 13:42:43.791110 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9bdd523c-399a-4ea8-999b-850a2dd6897c/nova-metadata-log/0.log" Nov 28 13:42:44 crc kubenswrapper[4779]: I1128 13:42:44.065627 4779 generic.go:334] "Generic (PLEG): container finished" podID="9dbd1ab1-111a-4649-a20f-69fae58be8bc" containerID="e4f8d2ab6da78580853606940498cf72b302e5119c20d83a4ac71b5c2738911d" exitCode=0 Nov 28 13:42:44 crc kubenswrapper[4779]: I1128 13:42:44.067277 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7ql9" event={"ID":"9dbd1ab1-111a-4649-a20f-69fae58be8bc","Type":"ContainerDied","Data":"e4f8d2ab6da78580853606940498cf72b302e5119c20d83a4ac71b5c2738911d"} Nov 28 13:42:44 crc kubenswrapper[4779]: I1128 13:42:44.067306 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7ql9" event={"ID":"9dbd1ab1-111a-4649-a20f-69fae58be8bc","Type":"ContainerStarted","Data":"467dd6ee9985dad90acc52187bff8841f5509aa6675a6da9accc91cd469119ad"} Nov 28 13:42:44 crc kubenswrapper[4779]: I1128 13:42:44.181632 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_458c0c46-271b-40b8-aadc-10cfb6939487/nova-scheduler-scheduler/0.log" Nov 28 13:42:44 crc kubenswrapper[4779]: I1128 13:42:44.274542 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c27e5f17-320d-472d-a3e7-6a0e9fae960b/mysql-bootstrap/0.log" Nov 28 13:42:44 crc kubenswrapper[4779]: I1128 13:42:44.382142 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c27e5f17-320d-472d-a3e7-6a0e9fae960b/mysql-bootstrap/0.log" Nov 28 13:42:44 crc kubenswrapper[4779]: I1128 13:42:44.471152 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c27e5f17-320d-472d-a3e7-6a0e9fae960b/galera/0.log" Nov 28 13:42:44 crc kubenswrapper[4779]: I1128 13:42:44.576430 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bd0f63de-dfe7-471d-92d8-b41e260d970b/mysql-bootstrap/0.log" Nov 28 13:42:44 crc kubenswrapper[4779]: I1128 13:42:44.893413 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bd0f63de-dfe7-471d-92d8-b41e260d970b/mysql-bootstrap/0.log" Nov 28 13:42:45 crc kubenswrapper[4779]: I1128 13:42:45.110329 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_488dc09e-4b09-40a3-8bfa-fd3116307f09/openstackclient/0.log" Nov 28 13:42:45 crc kubenswrapper[4779]: I1128 13:42:45.155836 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bd0f63de-dfe7-471d-92d8-b41e260d970b/galera/0.log" Nov 28 13:42:45 crc kubenswrapper[4779]: I1128 13:42:45.320189 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7bg4l_5049f1f8-c081-4671-8d6a-9282a53dd6bd/ovn-controller/0.log" Nov 28 13:42:45 crc kubenswrapper[4779]: I1128 13:42:45.641994 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-szlhd_2e43d521-a73a-4d72-8270-bb959b5d0a53/openstack-network-exporter/0.log" Nov 28 13:42:45 crc kubenswrapper[4779]: I1128 13:42:45.656806 4779 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9bdd523c-399a-4ea8-999b-850a2dd6897c/nova-metadata-metadata/0.log" Nov 28 13:42:45 crc kubenswrapper[4779]: I1128 13:42:45.771310 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c6d9j_a9ef6128-c3cf-4c5a-80ff-e0c4c263637d/ovsdb-server-init/0.log" Nov 28 13:42:46 crc kubenswrapper[4779]: I1128 13:42:46.082903 4779 generic.go:334] "Generic (PLEG): container finished" podID="7ebbe3fe-2895-4662-b623-d66afd272cdd" containerID="efdf4e45fdc6d5255bfd4beca495438aff9a71da88cebf4c1e0c10245f024f4e" exitCode=0 Nov 28 13:42:46 crc kubenswrapper[4779]: I1128 13:42:46.083049 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx8g9" event={"ID":"7ebbe3fe-2895-4662-b623-d66afd272cdd","Type":"ContainerDied","Data":"efdf4e45fdc6d5255bfd4beca495438aff9a71da88cebf4c1e0c10245f024f4e"} Nov 28 13:42:46 crc kubenswrapper[4779]: I1128 13:42:46.095117 4779 generic.go:334] "Generic (PLEG): container finished" podID="9dbd1ab1-111a-4649-a20f-69fae58be8bc" containerID="610f8db4a551792e390f921e444b99470ad182cb9defc2e8bebc3c3e51805411" exitCode=0 Nov 28 13:42:46 crc kubenswrapper[4779]: I1128 13:42:46.095171 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7ql9" event={"ID":"9dbd1ab1-111a-4649-a20f-69fae58be8bc","Type":"ContainerDied","Data":"610f8db4a551792e390f921e444b99470ad182cb9defc2e8bebc3c3e51805411"} Nov 28 13:42:46 crc kubenswrapper[4779]: I1128 13:42:46.172443 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c6d9j_a9ef6128-c3cf-4c5a-80ff-e0c4c263637d/ovsdb-server-init/0.log" Nov 28 13:42:46 crc kubenswrapper[4779]: I1128 13:42:46.220133 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c6d9j_a9ef6128-c3cf-4c5a-80ff-e0c4c263637d/ovsdb-server/0.log" Nov 28 13:42:46 crc kubenswrapper[4779]: I1128 13:42:46.285380 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c6d9j_a9ef6128-c3cf-4c5a-80ff-e0c4c263637d/ovs-vswitchd/0.log" Nov 28 13:42:46 crc kubenswrapper[4779]: I1128 13:42:46.636437 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d78bd78f-4723-4bf3-99ee-95509a0100af/openstack-network-exporter/0.log" Nov 28 13:42:46 crc kubenswrapper[4779]: I1128 13:42:46.828646 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d78bd78f-4723-4bf3-99ee-95509a0100af/ovn-northd/0.log" Nov 28 13:42:46 crc kubenswrapper[4779]: I1128 13:42:46.898877 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-nbj5n_b6763a62-0f2c-4f57-9391-731d12201cce/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:46 crc kubenswrapper[4779]: I1128 13:42:46.943319 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aa122564-e2c8-4ceb-b66d-1b677aaa4b21/openstack-network-exporter/0.log" Nov 28 13:42:47 crc kubenswrapper[4779]: I1128 13:42:47.108835 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aa122564-e2c8-4ceb-b66d-1b677aaa4b21/ovsdbserver-nb/0.log" Nov 28 13:42:47 crc kubenswrapper[4779]: I1128 13:42:47.142916 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7312815e-950e-48e2-bcbe-c74717279168/openstack-network-exporter/0.log" Nov 28 13:42:47 crc 
kubenswrapper[4779]: I1128 13:42:47.241794 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7312815e-950e-48e2-bcbe-c74717279168/ovsdbserver-sb/0.log" Nov 28 13:42:47 crc kubenswrapper[4779]: I1128 13:42:47.523737 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-674bfd5544-x2xz6_60595e82-374d-4133-8a19-c240290be2da/placement-api/0.log" Nov 28 13:42:47 crc kubenswrapper[4779]: I1128 13:42:47.534958 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-674bfd5544-x2xz6_60595e82-374d-4133-8a19-c240290be2da/placement-log/0.log" Nov 28 13:42:47 crc kubenswrapper[4779]: I1128 13:42:47.785652 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ee069283-02ed-414e-960c-7ae288363bb4/config-reloader/0.log" Nov 28 13:42:47 crc kubenswrapper[4779]: I1128 13:42:47.827765 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ee069283-02ed-414e-960c-7ae288363bb4/init-config-reloader/0.log" Nov 28 13:42:47 crc kubenswrapper[4779]: I1128 13:42:47.827929 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ee069283-02ed-414e-960c-7ae288363bb4/init-config-reloader/0.log" Nov 28 13:42:47 crc kubenswrapper[4779]: I1128 13:42:47.841016 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ee069283-02ed-414e-960c-7ae288363bb4/prometheus/0.log" Nov 28 13:42:48 crc kubenswrapper[4779]: I1128 13:42:48.031496 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ee069283-02ed-414e-960c-7ae288363bb4/thanos-sidecar/0.log" Nov 28 13:42:48 crc kubenswrapper[4779]: I1128 13:42:48.079647 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_80c2f0f7-d979-400e-b9fe-9369c3fc8ec5/setup-container/0.log" Nov 28 13:42:48 crc kubenswrapper[4779]: I1128 13:42:48.239368 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_80c2f0f7-d979-400e-b9fe-9369c3fc8ec5/setup-container/0.log" Nov 28 13:42:48 crc kubenswrapper[4779]: I1128 13:42:48.299627 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_80c2f0f7-d979-400e-b9fe-9369c3fc8ec5/rabbitmq/0.log" Nov 28 13:42:48 crc kubenswrapper[4779]: I1128 13:42:48.399777 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b0a12679-627a-4310-a9f7-93731231b12e/setup-container/0.log" Nov 28 13:42:48 crc kubenswrapper[4779]: I1128 13:42:48.655918 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw_dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:48 crc kubenswrapper[4779]: I1128 13:42:48.656492 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b0a12679-627a-4310-a9f7-93731231b12e/setup-container/0.log" Nov 28 13:42:48 crc kubenswrapper[4779]: I1128 13:42:48.721784 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b0a12679-627a-4310-a9f7-93731231b12e/rabbitmq/0.log" Nov 28 13:42:48 crc kubenswrapper[4779]: I1128 13:42:48.952717 4779 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-vz8vz_1bec9363-8311-40e0-ab18-fcaf7acf3dc9/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:49 crc kubenswrapper[4779]: I1128 13:42:49.074174 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-q474j_833a68ba-d01e-49d6-9055-0e4342fd3305/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:49 crc kubenswrapper[4779]: I1128 13:42:49.123414 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7ql9" event={"ID":"9dbd1ab1-111a-4649-a20f-69fae58be8bc","Type":"ContainerStarted","Data":"7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5"} Nov 28 13:42:49 crc kubenswrapper[4779]: I1128 13:42:49.130878 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx8g9" event={"ID":"7ebbe3fe-2895-4662-b623-d66afd272cdd","Type":"ContainerStarted","Data":"1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215"} Nov 28 13:42:49 crc kubenswrapper[4779]: I1128 13:42:49.163565 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g7ql9" podStartSLOduration=2.629042323 podStartE2EDuration="7.163547033s" podCreationTimestamp="2025-11-28 13:42:42 +0000 UTC" firstStartedPulling="2025-11-28 13:42:44.067919569 +0000 UTC m=+4024.633594923" lastFinishedPulling="2025-11-28 13:42:48.602424279 +0000 UTC m=+4029.168099633" observedRunningTime="2025-11-28 13:42:49.148373964 +0000 UTC m=+4029.714049318" watchObservedRunningTime="2025-11-28 13:42:49.163547033 +0000 UTC m=+4029.729222387" Nov 28 13:42:49 crc kubenswrapper[4779]: I1128 13:42:49.174744 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nx8g9" podStartSLOduration=2.538199491 podStartE2EDuration="10.174717227s" podCreationTimestamp="2025-11-28 13:42:39 +0000 UTC" firstStartedPulling="2025-11-28 13:42:41.002923395 +0000 UTC m=+4021.568598749" lastFinishedPulling="2025-11-28 13:42:48.639441121 +0000 UTC m=+4029.205116485" observedRunningTime="2025-11-28 13:42:49.167541578 +0000 UTC m=+4029.733216932" watchObservedRunningTime="2025-11-28 13:42:49.174717227 +0000 UTC m=+4029.740392581" Nov 28 13:42:49 crc kubenswrapper[4779]: I1128 13:42:49.411178 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-484k6_4e42e6af-a9aa-47a7-86ff-980266468175/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:49 crc kubenswrapper[4779]: I1128 13:42:49.441775 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-7mwdd_27fc90d6-d05c-412a-97fb-d9fe40d2a964/ssh-known-hosts-edpm-deployment/0.log" Nov 28 13:42:49 crc kubenswrapper[4779]: I1128 13:42:49.952672 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-9c6b99df5-82cnl_75d5987a-c7cb-400e-8efb-7375385f0e20/proxy-server/0.log" Nov 28 13:42:50 crc kubenswrapper[4779]: I1128 13:42:50.021586 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-9c6b99df5-82cnl_75d5987a-c7cb-400e-8efb-7375385f0e20/proxy-httpd/0.log" Nov 28 13:42:50 crc kubenswrapper[4779]: I1128 13:42:50.183894 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:50 crc 
kubenswrapper[4779]: I1128 13:42:50.183951 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:42:50 crc kubenswrapper[4779]: I1128 13:42:50.313517 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/account-reaper/0.log" Nov 28 13:42:50 crc kubenswrapper[4779]: I1128 13:42:50.338859 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/account-auditor/0.log" Nov 28 13:42:50 crc kubenswrapper[4779]: I1128 13:42:50.380806 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-25kzk_5e769641-0f27-4979-9823-dff8fe453054/swift-ring-rebalance/0.log" Nov 28 13:42:50 crc kubenswrapper[4779]: I1128 13:42:50.594974 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/container-auditor/0.log" Nov 28 13:42:50 crc kubenswrapper[4779]: I1128 13:42:50.652948 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/account-server/0.log" Nov 28 13:42:50 crc kubenswrapper[4779]: I1128 13:42:50.662416 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/account-replicator/0.log" Nov 28 13:42:50 crc kubenswrapper[4779]: I1128 13:42:50.796310 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/container-replicator/0.log" Nov 28 13:42:50 crc kubenswrapper[4779]: I1128 13:42:50.924366 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/container-server/0.log" Nov 28 13:42:50 crc kubenswrapper[4779]: I1128 13:42:50.939542 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/object-auditor/0.log" Nov 28 13:42:50 crc kubenswrapper[4779]: I1128 13:42:50.950739 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/container-updater/0.log" Nov 28 13:42:51 crc kubenswrapper[4779]: I1128 13:42:51.178182 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/object-expirer/0.log" Nov 28 13:42:51 crc kubenswrapper[4779]: I1128 13:42:51.188478 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/object-replicator/0.log" Nov 28 13:42:51 crc kubenswrapper[4779]: I1128 13:42:51.254595 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nx8g9" podUID="7ebbe3fe-2895-4662-b623-d66afd272cdd" containerName="registry-server" probeResult="failure" output=< Nov 28 13:42:51 crc kubenswrapper[4779]: timeout: failed to connect service ":50051" within 1s Nov 28 13:42:51 crc kubenswrapper[4779]: > Nov 28 13:42:51 crc kubenswrapper[4779]: I1128 13:42:51.261704 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/object-server/0.log" Nov 28 13:42:51 crc kubenswrapper[4779]: I1128 13:42:51.299981 4779 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/object-updater/0.log" Nov 28 13:42:51 crc kubenswrapper[4779]: I1128 13:42:51.460754 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/rsync/0.log" Nov 28 13:42:51 crc kubenswrapper[4779]: I1128 13:42:51.467582 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/swift-recon-cron/0.log" Nov 28 13:42:51 crc kubenswrapper[4779]: I1128 13:42:51.664413 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr_1e165593-fee0-4b82-87b3-6f102fdabe4f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:51 crc kubenswrapper[4779]: I1128 13:42:51.727591 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt_01e25eb1-de3d-4912-933f-09b22837436d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:42:52 crc kubenswrapper[4779]: I1128 13:42:52.455456 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:52 crc kubenswrapper[4779]: I1128 13:42:52.456607 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:52 crc kubenswrapper[4779]: I1128 13:42:52.521890 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:53 crc kubenswrapper[4779]: I1128 13:42:53.243073 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:56 crc kubenswrapper[4779]: I1128 13:42:56.281208 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g7ql9"] Nov 28 13:42:56 crc kubenswrapper[4779]: I1128 13:42:56.282007 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g7ql9" podUID="9dbd1ab1-111a-4649-a20f-69fae58be8bc" containerName="registry-server" containerID="cri-o://7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5" gracePeriod=2 Nov 28 13:42:56 crc kubenswrapper[4779]: I1128 13:42:56.860647 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:56 crc kubenswrapper[4779]: I1128 13:42:56.975167 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxgjn\" (UniqueName: \"kubernetes.io/projected/9dbd1ab1-111a-4649-a20f-69fae58be8bc-kube-api-access-gxgjn\") pod \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\" (UID: \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\") " Nov 28 13:42:56 crc kubenswrapper[4779]: I1128 13:42:56.975332 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbd1ab1-111a-4649-a20f-69fae58be8bc-utilities\") pod \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\" (UID: \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\") " Nov 28 13:42:56 crc kubenswrapper[4779]: I1128 13:42:56.975450 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbd1ab1-111a-4649-a20f-69fae58be8bc-catalog-content\") pod \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\" (UID: \"9dbd1ab1-111a-4649-a20f-69fae58be8bc\") " Nov 28 13:42:56 crc kubenswrapper[4779]: I1128 13:42:56.976241 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dbd1ab1-111a-4649-a20f-69fae58be8bc-utilities" (OuterVolumeSpecName: "utilities") pod "9dbd1ab1-111a-4649-a20f-69fae58be8bc" (UID: "9dbd1ab1-111a-4649-a20f-69fae58be8bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:42:56 crc kubenswrapper[4779]: I1128 13:42:56.985323 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dbd1ab1-111a-4649-a20f-69fae58be8bc-kube-api-access-gxgjn" (OuterVolumeSpecName: "kube-api-access-gxgjn") pod "9dbd1ab1-111a-4649-a20f-69fae58be8bc" (UID: "9dbd1ab1-111a-4649-a20f-69fae58be8bc"). InnerVolumeSpecName "kube-api-access-gxgjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.042429 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dbd1ab1-111a-4649-a20f-69fae58be8bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9dbd1ab1-111a-4649-a20f-69fae58be8bc" (UID: "9dbd1ab1-111a-4649-a20f-69fae58be8bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.080941 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dbd1ab1-111a-4649-a20f-69fae58be8bc-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.080969 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dbd1ab1-111a-4649-a20f-69fae58be8bc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.080981 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxgjn\" (UniqueName: \"kubernetes.io/projected/9dbd1ab1-111a-4649-a20f-69fae58be8bc-kube-api-access-gxgjn\") on node \"crc\" DevicePath \"\"" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.236863 4779 generic.go:334] "Generic (PLEG): container finished" podID="9dbd1ab1-111a-4649-a20f-69fae58be8bc" containerID="7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5" exitCode=0 Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.236907 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7ql9" event={"ID":"9dbd1ab1-111a-4649-a20f-69fae58be8bc","Type":"ContainerDied","Data":"7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5"} Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.236938 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7ql9" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.236950 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7ql9" event={"ID":"9dbd1ab1-111a-4649-a20f-69fae58be8bc","Type":"ContainerDied","Data":"467dd6ee9985dad90acc52187bff8841f5509aa6675a6da9accc91cd469119ad"} Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.236960 4779 scope.go:117] "RemoveContainer" containerID="7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.259033 4779 scope.go:117] "RemoveContainer" containerID="610f8db4a551792e390f921e444b99470ad182cb9defc2e8bebc3c3e51805411" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.266438 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g7ql9"] Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.273439 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g7ql9"] Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.295999 4779 scope.go:117] "RemoveContainer" containerID="e4f8d2ab6da78580853606940498cf72b302e5119c20d83a4ac71b5c2738911d" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.345489 4779 scope.go:117] "RemoveContainer" containerID="7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5" Nov 28 13:42:57 crc kubenswrapper[4779]: E1128 13:42:57.346826 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5\": container with ID starting with 7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5 not found: ID does not exist" containerID="7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.346887 
4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5"} err="failed to get container status \"7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5\": rpc error: code = NotFound desc = could not find container \"7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5\": container with ID starting with 7398d2251aa03c411549c27ac5a5929b9a1bcbf97d931dc63cab97defe5bb6e5 not found: ID does not exist" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.346920 4779 scope.go:117] "RemoveContainer" containerID="610f8db4a551792e390f921e444b99470ad182cb9defc2e8bebc3c3e51805411" Nov 28 13:42:57 crc kubenswrapper[4779]: E1128 13:42:57.347465 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"610f8db4a551792e390f921e444b99470ad182cb9defc2e8bebc3c3e51805411\": container with ID starting with 610f8db4a551792e390f921e444b99470ad182cb9defc2e8bebc3c3e51805411 not found: ID does not exist" containerID="610f8db4a551792e390f921e444b99470ad182cb9defc2e8bebc3c3e51805411" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.347552 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"610f8db4a551792e390f921e444b99470ad182cb9defc2e8bebc3c3e51805411"} err="failed to get container status \"610f8db4a551792e390f921e444b99470ad182cb9defc2e8bebc3c3e51805411\": rpc error: code = NotFound desc = could not find container \"610f8db4a551792e390f921e444b99470ad182cb9defc2e8bebc3c3e51805411\": container with ID starting with 610f8db4a551792e390f921e444b99470ad182cb9defc2e8bebc3c3e51805411 not found: ID does not exist" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.347621 4779 scope.go:117] "RemoveContainer" containerID="e4f8d2ab6da78580853606940498cf72b302e5119c20d83a4ac71b5c2738911d" Nov 28 13:42:57 crc kubenswrapper[4779]: E1128 13:42:57.347981 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4f8d2ab6da78580853606940498cf72b302e5119c20d83a4ac71b5c2738911d\": container with ID starting with e4f8d2ab6da78580853606940498cf72b302e5119c20d83a4ac71b5c2738911d not found: ID does not exist" containerID="e4f8d2ab6da78580853606940498cf72b302e5119c20d83a4ac71b5c2738911d" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.348018 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f8d2ab6da78580853606940498cf72b302e5119c20d83a4ac71b5c2738911d"} err="failed to get container status \"e4f8d2ab6da78580853606940498cf72b302e5119c20d83a4ac71b5c2738911d\": rpc error: code = NotFound desc = could not find container \"e4f8d2ab6da78580853606940498cf72b302e5119c20d83a4ac71b5c2738911d\": container with ID starting with e4f8d2ab6da78580853606940498cf72b302e5119c20d83a4ac71b5c2738911d not found: ID does not exist" Nov 28 13:42:57 crc kubenswrapper[4779]: I1128 13:42:57.738025 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dbd1ab1-111a-4649-a20f-69fae58be8bc" path="/var/lib/kubelet/pods/9dbd1ab1-111a-4649-a20f-69fae58be8bc/volumes" Nov 28 13:42:58 crc kubenswrapper[4779]: I1128 13:42:58.857007 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_28783fa8-aac9-4041-aba2-ba78f5be6f66/memcached/0.log" Nov 28 13:43:00 crc kubenswrapper[4779]: I1128 13:43:00.241036 4779 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:43:00 crc kubenswrapper[4779]: I1128 13:43:00.294805 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:43:03 crc kubenswrapper[4779]: I1128 13:43:03.665806 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nx8g9"] Nov 28 13:43:03 crc kubenswrapper[4779]: I1128 13:43:03.666606 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nx8g9" podUID="7ebbe3fe-2895-4662-b623-d66afd272cdd" containerName="registry-server" containerID="cri-o://1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215" gracePeriod=2 Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.186140 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.296654 4779 generic.go:334] "Generic (PLEG): container finished" podID="7ebbe3fe-2895-4662-b623-d66afd272cdd" containerID="1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215" exitCode=0 Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.296717 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx8g9" event={"ID":"7ebbe3fe-2895-4662-b623-d66afd272cdd","Type":"ContainerDied","Data":"1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215"} Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.296760 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx8g9" event={"ID":"7ebbe3fe-2895-4662-b623-d66afd272cdd","Type":"ContainerDied","Data":"accfeaee85ff7735712b3fbae07e84a0df8a76b52ce652de0b08144973883793"} Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.296780 4779 scope.go:117] "RemoveContainer" containerID="1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.296774 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nx8g9" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.315876 4779 scope.go:117] "RemoveContainer" containerID="efdf4e45fdc6d5255bfd4beca495438aff9a71da88cebf4c1e0c10245f024f4e" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.330754 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebbe3fe-2895-4662-b623-d66afd272cdd-utilities\") pod \"7ebbe3fe-2895-4662-b623-d66afd272cdd\" (UID: \"7ebbe3fe-2895-4662-b623-d66afd272cdd\") " Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.330984 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps6rl\" (UniqueName: \"kubernetes.io/projected/7ebbe3fe-2895-4662-b623-d66afd272cdd-kube-api-access-ps6rl\") pod \"7ebbe3fe-2895-4662-b623-d66afd272cdd\" (UID: \"7ebbe3fe-2895-4662-b623-d66afd272cdd\") " Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.331023 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebbe3fe-2895-4662-b623-d66afd272cdd-catalog-content\") pod \"7ebbe3fe-2895-4662-b623-d66afd272cdd\" (UID: \"7ebbe3fe-2895-4662-b623-d66afd272cdd\") " Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.333168 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ebbe3fe-2895-4662-b623-d66afd272cdd-utilities" (OuterVolumeSpecName: "utilities") pod "7ebbe3fe-2895-4662-b623-d66afd272cdd" (UID: "7ebbe3fe-2895-4662-b623-d66afd272cdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.339789 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ebbe3fe-2895-4662-b623-d66afd272cdd-kube-api-access-ps6rl" (OuterVolumeSpecName: "kube-api-access-ps6rl") pod "7ebbe3fe-2895-4662-b623-d66afd272cdd" (UID: "7ebbe3fe-2895-4662-b623-d66afd272cdd"). InnerVolumeSpecName "kube-api-access-ps6rl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.342815 4779 scope.go:117] "RemoveContainer" containerID="342c4a791c04616f98178f6b4b309b7121a1f819c69e0a681de4d850c0fb2680" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.420700 4779 scope.go:117] "RemoveContainer" containerID="1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215" Nov 28 13:43:04 crc kubenswrapper[4779]: E1128 13:43:04.421223 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215\": container with ID starting with 1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215 not found: ID does not exist" containerID="1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.421267 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215"} err="failed to get container status \"1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215\": rpc error: code = NotFound desc = could not find container \"1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215\": container with ID starting with 1926e6df4a5f8c7ea943faaebce87cd86df5ed80ff9f9cb79a8950216d35d215 not found: ID does not exist" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.421291 4779 scope.go:117] "RemoveContainer" containerID="efdf4e45fdc6d5255bfd4beca495438aff9a71da88cebf4c1e0c10245f024f4e" Nov 28 13:43:04 crc kubenswrapper[4779]: E1128 13:43:04.421586 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efdf4e45fdc6d5255bfd4beca495438aff9a71da88cebf4c1e0c10245f024f4e\": container with ID starting with efdf4e45fdc6d5255bfd4beca495438aff9a71da88cebf4c1e0c10245f024f4e not found: ID does not exist" containerID="efdf4e45fdc6d5255bfd4beca495438aff9a71da88cebf4c1e0c10245f024f4e" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.421615 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efdf4e45fdc6d5255bfd4beca495438aff9a71da88cebf4c1e0c10245f024f4e"} err="failed to get container status \"efdf4e45fdc6d5255bfd4beca495438aff9a71da88cebf4c1e0c10245f024f4e\": rpc error: code = NotFound desc = could not find container \"efdf4e45fdc6d5255bfd4beca495438aff9a71da88cebf4c1e0c10245f024f4e\": container with ID starting with efdf4e45fdc6d5255bfd4beca495438aff9a71da88cebf4c1e0c10245f024f4e not found: ID does not exist" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.421632 4779 scope.go:117] "RemoveContainer" containerID="342c4a791c04616f98178f6b4b309b7121a1f819c69e0a681de4d850c0fb2680" Nov 28 13:43:04 crc kubenswrapper[4779]: E1128 13:43:04.421833 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"342c4a791c04616f98178f6b4b309b7121a1f819c69e0a681de4d850c0fb2680\": container with ID starting with 342c4a791c04616f98178f6b4b309b7121a1f819c69e0a681de4d850c0fb2680 not found: ID does not exist" containerID="342c4a791c04616f98178f6b4b309b7121a1f819c69e0a681de4d850c0fb2680" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.421856 4779 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"342c4a791c04616f98178f6b4b309b7121a1f819c69e0a681de4d850c0fb2680"} err="failed to get container status \"342c4a791c04616f98178f6b4b309b7121a1f819c69e0a681de4d850c0fb2680\": rpc error: code = NotFound desc = could not find container \"342c4a791c04616f98178f6b4b309b7121a1f819c69e0a681de4d850c0fb2680\": container with ID starting with 342c4a791c04616f98178f6b4b309b7121a1f819c69e0a681de4d850c0fb2680 not found: ID does not exist" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.433343 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps6rl\" (UniqueName: \"kubernetes.io/projected/7ebbe3fe-2895-4662-b623-d66afd272cdd-kube-api-access-ps6rl\") on node \"crc\" DevicePath \"\"" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.433380 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebbe3fe-2895-4662-b623-d66afd272cdd-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.446599 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ebbe3fe-2895-4662-b623-d66afd272cdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ebbe3fe-2895-4662-b623-d66afd272cdd" (UID: "7ebbe3fe-2895-4662-b623-d66afd272cdd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.535211 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebbe3fe-2895-4662-b623-d66afd272cdd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.630267 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nx8g9"] Nov 28 13:43:04 crc kubenswrapper[4779]: I1128 13:43:04.640447 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nx8g9"] Nov 28 13:43:05 crc kubenswrapper[4779]: I1128 13:43:05.739119 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ebbe3fe-2895-4662-b623-d66afd272cdd" path="/var/lib/kubelet/pods/7ebbe3fe-2895-4662-b623-d66afd272cdd/volumes" Nov 28 13:43:19 crc kubenswrapper[4779]: I1128 13:43:19.501838 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-hhr2g_e7e646e3-00c9-4359-b012-aaff60962a76/kube-rbac-proxy/0.log" Nov 28 13:43:19 crc kubenswrapper[4779]: I1128 13:43:19.552840 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-hhr2g_e7e646e3-00c9-4359-b012-aaff60962a76/manager/0.log" Nov 28 13:43:19 crc kubenswrapper[4779]: I1128 13:43:19.740821 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-l52fj_854f928b-5068-4de9-b865-7fb2a26ca9e4/kube-rbac-proxy/0.log" Nov 28 13:43:19 crc kubenswrapper[4779]: I1128 13:43:19.765782 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-l52fj_854f928b-5068-4de9-b865-7fb2a26ca9e4/manager/0.log" Nov 28 13:43:19 crc kubenswrapper[4779]: I1128 13:43:19.810382 4779 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-rh5q9_8d20efbb-527c-4085-a974-d49ee454b545/kube-rbac-proxy/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.004080 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-rh5q9_8d20efbb-527c-4085-a974-d49ee454b545/manager/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.026377 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/util/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.180718 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/util/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.184081 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/pull/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.211368 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/pull/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.404016 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/extract/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.414190 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/util/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.434003 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/pull/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.574561 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-ns58c_eaf24224-e1f5-44d8-8151-54be9408b429/kube-rbac-proxy/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.646651 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-ns58c_eaf24224-e1f5-44d8-8151-54be9408b429/manager/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.707111 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-wptr7_b3e0c6a3-33d8-4c1e-8b44-156de87d5621/kube-rbac-proxy/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.838676 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-wptr7_b3e0c6a3-33d8-4c1e-8b44-156de87d5621/manager/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 13:43:20.895318 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-vd654_40688ccc-932c-411e-8703-4bf0f11ec3bf/kube-rbac-proxy/0.log" Nov 28 13:43:20 crc kubenswrapper[4779]: I1128 
13:43:20.914238 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-vd654_40688ccc-932c-411e-8703-4bf0f11ec3bf/manager/0.log" Nov 28 13:43:21 crc kubenswrapper[4779]: I1128 13:43:21.033689 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-7pv5r_af7046d6-f852-4c62-83e6-ea213812d86c/kube-rbac-proxy/0.log" Nov 28 13:43:21 crc kubenswrapper[4779]: I1128 13:43:21.245448 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-7pv5r_af7046d6-f852-4c62-83e6-ea213812d86c/manager/0.log" Nov 28 13:43:21 crc kubenswrapper[4779]: I1128 13:43:21.260917 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-n952x_493d54b8-1e0a-4270-8180-ba1bc746c783/kube-rbac-proxy/0.log" Nov 28 13:43:21 crc kubenswrapper[4779]: I1128 13:43:21.279263 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-n952x_493d54b8-1e0a-4270-8180-ba1bc746c783/manager/0.log" Nov 28 13:43:21 crc kubenswrapper[4779]: I1128 13:43:21.501604 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-lfj45_da8e3e32-3cc1-4b1b-91c5-31ac6e660d65/kube-rbac-proxy/0.log" Nov 28 13:43:21 crc kubenswrapper[4779]: I1128 13:43:21.548219 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-lfj45_da8e3e32-3cc1-4b1b-91c5-31ac6e660d65/manager/0.log" Nov 28 13:43:21 crc kubenswrapper[4779]: I1128 13:43:21.559126 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-9xxwc_75996749-aa6c-4a8e-ba7f-412209db3939/kube-rbac-proxy/0.log" Nov 28 13:43:21 crc kubenswrapper[4779]: I1128 13:43:21.707403 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-9xxwc_75996749-aa6c-4a8e-ba7f-412209db3939/manager/0.log" Nov 28 13:43:21 crc kubenswrapper[4779]: I1128 13:43:21.711366 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-xqxsn_b96763b6-e6a4-4429-8fe4-6b23620824c1/kube-rbac-proxy/0.log" Nov 28 13:43:21 crc kubenswrapper[4779]: I1128 13:43:21.761814 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-xqxsn_b96763b6-e6a4-4429-8fe4-6b23620824c1/manager/0.log" Nov 28 13:43:21 crc kubenswrapper[4779]: I1128 13:43:21.878021 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-cnfmd_911b9690-ddec-439e-9ef5-a7d80562f51c/kube-rbac-proxy/0.log" Nov 28 13:43:22 crc kubenswrapper[4779]: I1128 13:43:22.021199 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-cnfmd_911b9690-ddec-439e-9ef5-a7d80562f51c/manager/0.log" Nov 28 13:43:22 crc kubenswrapper[4779]: I1128 13:43:22.097787 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-zzflc_3b4accd2-e9c1-4e51-a559-c5cf108f5af1/kube-rbac-proxy/0.log" Nov 28 13:43:22 crc 
Nov 28 13:43:22 crc kubenswrapper[4779]: I1128 13:43:22.246234 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-kvnt5_623cd065-a088-41d4-9b98-8be8d60c0f20/kube-rbac-proxy/0.log"
Nov 28 13:43:22 crc kubenswrapper[4779]: I1128 13:43:22.313639 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-kvnt5_623cd065-a088-41d4-9b98-8be8d60c0f20/manager/0.log"
Nov 28 13:43:22 crc kubenswrapper[4779]: I1128 13:43:22.428291 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh_66bfbaf1-3247-47c1-aa58-19cf5875882e/kube-rbac-proxy/0.log"
Nov 28 13:43:22 crc kubenswrapper[4779]: I1128 13:43:22.428644 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh_66bfbaf1-3247-47c1-aa58-19cf5875882e/manager/0.log"
Nov 28 13:43:22 crc kubenswrapper[4779]: I1128 13:43:22.846173 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qvbp8_527c77d8-6692-434a-88b6-4d5e3dc93337/registry-server/0.log"
Nov 28 13:43:22 crc kubenswrapper[4779]: I1128 13:43:22.881277 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7bb768d89f-48p4r_459f9c74-7dc8-401d-8df4-2c1b947f87df/operator/0.log"
Nov 28 13:43:23 crc kubenswrapper[4779]: I1128 13:43:23.031282 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-v49kv_bb4ac6b3-6655-4e29-8cf7-bdae98df3386/kube-rbac-proxy/0.log"
Nov 28 13:43:23 crc kubenswrapper[4779]: I1128 13:43:23.149624 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-v49kv_bb4ac6b3-6655-4e29-8cf7-bdae98df3386/manager/0.log"
Nov 28 13:43:23 crc kubenswrapper[4779]: I1128 13:43:23.387690 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-lnf86_b1c19869-b98a-40c8-a312-8c49d69bdf0f/kube-rbac-proxy/0.log"
Nov 28 13:43:23 crc kubenswrapper[4779]: I1128 13:43:23.456285 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-lnf86_b1c19869-b98a-40c8-a312-8c49d69bdf0f/manager/0.log"
Nov 28 13:43:23 crc kubenswrapper[4779]: I1128 13:43:23.590474 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-495dt_1c62c5f4-5757-46d4-92e5-7fdb2b21c88e/operator/0.log"
Nov 28 13:43:23 crc kubenswrapper[4779]: I1128 13:43:23.696069 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-c6wb2_f3d69218-2422-473c-ae41-bd2a2b902355/manager/0.log"
Nov 28 13:43:23 crc kubenswrapper[4779]: I1128 13:43:23.730659 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-c6wb2_f3d69218-2422-473c-ae41-bd2a2b902355/kube-rbac-proxy/0.log"
Nov 28 13:43:23 crc kubenswrapper[4779]: I1128 13:43:23.827250 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7574d9569-x822f_f1d9753d-b49d-4e32-b312-137314283984/kube-rbac-proxy/0.log"
Nov 28 13:43:23 crc kubenswrapper[4779]: I1128 13:43:23.878150 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d967756df-nvprs_31627cc1-b543-4da9-8fe1-ac12e7f09531/manager/0.log"
Nov 28 13:43:24 crc kubenswrapper[4779]: I1128 13:43:24.021132 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7574d9569-x822f_f1d9753d-b49d-4e32-b312-137314283984/manager/0.log"
Nov 28 13:43:24 crc kubenswrapper[4779]: I1128 13:43:24.027624 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-h4czz_39fdca45-fa34-4d90-93a9-1123dff79930/kube-rbac-proxy/0.log"
Nov 28 13:43:24 crc kubenswrapper[4779]: I1128 13:43:24.078835 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-h4czz_39fdca45-fa34-4d90-93a9-1123dff79930/manager/0.log"
Nov 28 13:43:24 crc kubenswrapper[4779]: I1128 13:43:24.163941 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-hjhz4_1799095f-becf-4b8e-bb0b-28c04a819e59/kube-rbac-proxy/0.log"
Nov 28 13:43:24 crc kubenswrapper[4779]: I1128 13:43:24.196140 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-hjhz4_1799095f-becf-4b8e-bb0b-28c04a819e59/manager/0.log"
Nov 28 13:43:45 crc kubenswrapper[4779]: I1128 13:43:45.256847 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-njrwv_1475f2e1-1c5b-470d-b0aa-0645ad327bb5/control-plane-machine-set-operator/0.log"
Nov 28 13:43:45 crc kubenswrapper[4779]: I1128 13:43:45.471484 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bz4fl_c3eebda0-cd9c-448c-8e0c-c25aea48fd54/kube-rbac-proxy/0.log"
Nov 28 13:43:45 crc kubenswrapper[4779]: I1128 13:43:45.487286 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bz4fl_c3eebda0-cd9c-448c-8e0c-c25aea48fd54/machine-api-operator/0.log"
Nov 28 13:43:46 crc kubenswrapper[4779]: I1128 13:43:46.284432 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:43:46 crc kubenswrapper[4779]: I1128 13:43:46.284687 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:43:59 crc kubenswrapper[4779]: I1128 13:43:59.838052 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-5qqff_ec2c397e-6b4d-4ffc-9ffa-4f437657da02/cert-manager-controller/0.log"
Nov 28 13:43:59 crc kubenswrapper[4779]: I1128 13:43:59.928039 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-fx6q6_17acea2c-1197-4905-bb74-3f4137eb521d/cert-manager-cainjector/0.log"
Nov 28 13:43:59 crc kubenswrapper[4779]: I1128 13:43:59.973012 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-bvk27_8445721b-8f86-4161-adc3-2ddf58f3aa94/cert-manager-webhook/0.log"
Nov 28 13:44:12 crc kubenswrapper[4779]: I1128 13:44:12.855465 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-ss4d2_a8a297b2-fc61-4bcf-9872-106b5776cb43/nmstate-console-plugin/0.log"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.008234 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mqs42_70ee469b-f21f-4b94-9f6a-1b79db90e4fd/nmstate-handler/0.log"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.039837 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-h6q7b_0c9a8cc1-da76-4824-8303-fe9e18c76af3/nmstate-metrics/0.log"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.069437 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-h6q7b_0c9a8cc1-da76-4824-8303-fe9e18c76af3/kube-rbac-proxy/0.log"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.211935 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-27lnx_82cdcdcc-f4b1-4f17-b8be-81e5525a2438/nmstate-operator/0.log"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.256599 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rdvjw"]
Nov 28 13:44:13 crc kubenswrapper[4779]: E1128 13:44:13.257163 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebbe3fe-2895-4662-b623-d66afd272cdd" containerName="extract-utilities"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.257184 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebbe3fe-2895-4662-b623-d66afd272cdd" containerName="extract-utilities"
Nov 28 13:44:13 crc kubenswrapper[4779]: E1128 13:44:13.257230 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbd1ab1-111a-4649-a20f-69fae58be8bc" containerName="registry-server"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.257239 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbd1ab1-111a-4649-a20f-69fae58be8bc" containerName="registry-server"
Nov 28 13:44:13 crc kubenswrapper[4779]: E1128 13:44:13.257250 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebbe3fe-2895-4662-b623-d66afd272cdd" containerName="extract-content"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.257259 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebbe3fe-2895-4662-b623-d66afd272cdd" containerName="extract-content"
Nov 28 13:44:13 crc kubenswrapper[4779]: E1128 13:44:13.257280 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebbe3fe-2895-4662-b623-d66afd272cdd" containerName="registry-server"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.257289 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebbe3fe-2895-4662-b623-d66afd272cdd" containerName="registry-server"
Nov 28 13:44:13 crc kubenswrapper[4779]: E1128 13:44:13.257312 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbd1ab1-111a-4649-a20f-69fae58be8bc" containerName="extract-utilities"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.257321 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbd1ab1-111a-4649-a20f-69fae58be8bc" containerName="extract-utilities"
Nov 28 13:44:13 crc kubenswrapper[4779]: E1128 13:44:13.257342 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbd1ab1-111a-4649-a20f-69fae58be8bc" containerName="extract-content"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.257352 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbd1ab1-111a-4649-a20f-69fae58be8bc" containerName="extract-content"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.257596 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dbd1ab1-111a-4649-a20f-69fae58be8bc" containerName="registry-server"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.257635 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ebbe3fe-2895-4662-b623-d66afd272cdd" containerName="registry-server"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.259557 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdvjw"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.267936 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdvjw"]
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.303573 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-zrh7w_2ea9d3e0-ee7b-48bc-a358-689318fa4dae/nmstate-webhook/0.log"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.321190 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b821c3b-5fa9-4f87-961b-cc027c329d9a-utilities\") pod \"certified-operators-rdvjw\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " pod="openshift-marketplace/certified-operators-rdvjw"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.321330 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxqnt\" (UniqueName: \"kubernetes.io/projected/1b821c3b-5fa9-4f87-961b-cc027c329d9a-kube-api-access-pxqnt\") pod \"certified-operators-rdvjw\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " pod="openshift-marketplace/certified-operators-rdvjw"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.321397 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b821c3b-5fa9-4f87-961b-cc027c329d9a-catalog-content\") pod \"certified-operators-rdvjw\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " pod="openshift-marketplace/certified-operators-rdvjw"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.423207 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b821c3b-5fa9-4f87-961b-cc027c329d9a-utilities\") pod \"certified-operators-rdvjw\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " pod="openshift-marketplace/certified-operators-rdvjw"
Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.423269 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxqnt\" (UniqueName: \"kubernetes.io/projected/1b821c3b-5fa9-4f87-961b-cc027c329d9a-kube-api-access-pxqnt\") pod \"certified-operators-rdvjw\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " pod="openshift-marketplace/certified-operators-rdvjw"
\"kubernetes.io/projected/1b821c3b-5fa9-4f87-961b-cc027c329d9a-kube-api-access-pxqnt\") pod \"certified-operators-rdvjw\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " pod="openshift-marketplace/certified-operators-rdvjw" Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.423315 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b821c3b-5fa9-4f87-961b-cc027c329d9a-catalog-content\") pod \"certified-operators-rdvjw\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " pod="openshift-marketplace/certified-operators-rdvjw" Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.423834 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b821c3b-5fa9-4f87-961b-cc027c329d9a-utilities\") pod \"certified-operators-rdvjw\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " pod="openshift-marketplace/certified-operators-rdvjw" Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.423935 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b821c3b-5fa9-4f87-961b-cc027c329d9a-catalog-content\") pod \"certified-operators-rdvjw\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " pod="openshift-marketplace/certified-operators-rdvjw" Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.449845 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxqnt\" (UniqueName: \"kubernetes.io/projected/1b821c3b-5fa9-4f87-961b-cc027c329d9a-kube-api-access-pxqnt\") pod \"certified-operators-rdvjw\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " pod="openshift-marketplace/certified-operators-rdvjw" Nov 28 13:44:13 crc kubenswrapper[4779]: I1128 13:44:13.592208 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdvjw" Nov 28 13:44:14 crc kubenswrapper[4779]: W1128 13:44:14.129186 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b821c3b_5fa9_4f87_961b_cc027c329d9a.slice/crio-2e2fcfa02b5e0cc7dbe4903bdef8c5f66209889e190b0188e8cd7addc75ad561 WatchSource:0}: Error finding container 2e2fcfa02b5e0cc7dbe4903bdef8c5f66209889e190b0188e8cd7addc75ad561: Status 404 returned error can't find the container with id 2e2fcfa02b5e0cc7dbe4903bdef8c5f66209889e190b0188e8cd7addc75ad561 Nov 28 13:44:14 crc kubenswrapper[4779]: I1128 13:44:14.134310 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdvjw"] Nov 28 13:44:15 crc kubenswrapper[4779]: I1128 13:44:15.593616 4779 generic.go:334] "Generic (PLEG): container finished" podID="1b821c3b-5fa9-4f87-961b-cc027c329d9a" containerID="c393ce0fdae1a86beeac3a533f54c9691bdbc50d698fc1f5259c571f16105b4d" exitCode=0 Nov 28 13:44:15 crc kubenswrapper[4779]: I1128 13:44:15.594198 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdvjw" event={"ID":"1b821c3b-5fa9-4f87-961b-cc027c329d9a","Type":"ContainerDied","Data":"c393ce0fdae1a86beeac3a533f54c9691bdbc50d698fc1f5259c571f16105b4d"} Nov 28 13:44:15 crc kubenswrapper[4779]: I1128 13:44:15.594295 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdvjw" event={"ID":"1b821c3b-5fa9-4f87-961b-cc027c329d9a","Type":"ContainerStarted","Data":"2e2fcfa02b5e0cc7dbe4903bdef8c5f66209889e190b0188e8cd7addc75ad561"} Nov 28 13:44:16 crc kubenswrapper[4779]: I1128 13:44:16.285541 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:44:16 crc kubenswrapper[4779]: I1128 13:44:16.286071 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:44:16 crc kubenswrapper[4779]: I1128 13:44:16.605949 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdvjw" event={"ID":"1b821c3b-5fa9-4f87-961b-cc027c329d9a","Type":"ContainerStarted","Data":"5708238a9d1d8d3f3baa7c691573f2d4ce829adfa033210ee2876078c3030864"} Nov 28 13:44:17 crc kubenswrapper[4779]: I1128 13:44:17.628185 4779 generic.go:334] "Generic (PLEG): container finished" podID="1b821c3b-5fa9-4f87-961b-cc027c329d9a" containerID="5708238a9d1d8d3f3baa7c691573f2d4ce829adfa033210ee2876078c3030864" exitCode=0 Nov 28 13:44:17 crc kubenswrapper[4779]: I1128 13:44:17.628590 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdvjw" event={"ID":"1b821c3b-5fa9-4f87-961b-cc027c329d9a","Type":"ContainerDied","Data":"5708238a9d1d8d3f3baa7c691573f2d4ce829adfa033210ee2876078c3030864"} Nov 28 13:44:18 crc kubenswrapper[4779]: I1128 13:44:18.640415 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdvjw" 
event={"ID":"1b821c3b-5fa9-4f87-961b-cc027c329d9a","Type":"ContainerStarted","Data":"f9c7430a88f93a62517c081ee9d7e97bf2fc2b66ec9865d4d36e6e556f483e85"} Nov 28 13:44:18 crc kubenswrapper[4779]: I1128 13:44:18.657147 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rdvjw" podStartSLOduration=3.17813094 podStartE2EDuration="5.657133854s" podCreationTimestamp="2025-11-28 13:44:13 +0000 UTC" firstStartedPulling="2025-11-28 13:44:15.598994912 +0000 UTC m=+4116.164670266" lastFinishedPulling="2025-11-28 13:44:18.077997826 +0000 UTC m=+4118.643673180" observedRunningTime="2025-11-28 13:44:18.654441544 +0000 UTC m=+4119.220116898" watchObservedRunningTime="2025-11-28 13:44:18.657133854 +0000 UTC m=+4119.222809208" Nov 28 13:44:23 crc kubenswrapper[4779]: I1128 13:44:23.593237 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rdvjw" Nov 28 13:44:23 crc kubenswrapper[4779]: I1128 13:44:23.593738 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rdvjw" Nov 28 13:44:23 crc kubenswrapper[4779]: I1128 13:44:23.653186 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rdvjw" Nov 28 13:44:23 crc kubenswrapper[4779]: I1128 13:44:23.747800 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rdvjw" Nov 28 13:44:28 crc kubenswrapper[4779]: I1128 13:44:28.030902 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdvjw"] Nov 28 13:44:28 crc kubenswrapper[4779]: I1128 13:44:28.031574 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rdvjw" podUID="1b821c3b-5fa9-4f87-961b-cc027c329d9a" containerName="registry-server" containerID="cri-o://f9c7430a88f93a62517c081ee9d7e97bf2fc2b66ec9865d4d36e6e556f483e85" gracePeriod=2 Nov 28 13:44:28 crc kubenswrapper[4779]: I1128 13:44:28.748778 4779 generic.go:334] "Generic (PLEG): container finished" podID="1b821c3b-5fa9-4f87-961b-cc027c329d9a" containerID="f9c7430a88f93a62517c081ee9d7e97bf2fc2b66ec9865d4d36e6e556f483e85" exitCode=0 Nov 28 13:44:28 crc kubenswrapper[4779]: I1128 13:44:28.748854 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdvjw" event={"ID":"1b821c3b-5fa9-4f87-961b-cc027c329d9a","Type":"ContainerDied","Data":"f9c7430a88f93a62517c081ee9d7e97bf2fc2b66ec9865d4d36e6e556f483e85"} Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.050663 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdvjw" Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.238305 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxqnt\" (UniqueName: \"kubernetes.io/projected/1b821c3b-5fa9-4f87-961b-cc027c329d9a-kube-api-access-pxqnt\") pod \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.238359 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b821c3b-5fa9-4f87-961b-cc027c329d9a-utilities\") pod \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.238392 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b821c3b-5fa9-4f87-961b-cc027c329d9a-catalog-content\") pod \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\" (UID: \"1b821c3b-5fa9-4f87-961b-cc027c329d9a\") " Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.239046 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b821c3b-5fa9-4f87-961b-cc027c329d9a-utilities" (OuterVolumeSpecName: "utilities") pod "1b821c3b-5fa9-4f87-961b-cc027c329d9a" (UID: "1b821c3b-5fa9-4f87-961b-cc027c329d9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.244315 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b821c3b-5fa9-4f87-961b-cc027c329d9a-kube-api-access-pxqnt" (OuterVolumeSpecName: "kube-api-access-pxqnt") pod "1b821c3b-5fa9-4f87-961b-cc027c329d9a" (UID: "1b821c3b-5fa9-4f87-961b-cc027c329d9a"). InnerVolumeSpecName "kube-api-access-pxqnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.285503 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b821c3b-5fa9-4f87-961b-cc027c329d9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b821c3b-5fa9-4f87-961b-cc027c329d9a" (UID: "1b821c3b-5fa9-4f87-961b-cc027c329d9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.341835 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b821c3b-5fa9-4f87-961b-cc027c329d9a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.341881 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b821c3b-5fa9-4f87-961b-cc027c329d9a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.341917 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxqnt\" (UniqueName: \"kubernetes.io/projected/1b821c3b-5fa9-4f87-961b-cc027c329d9a-kube-api-access-pxqnt\") on node \"crc\" DevicePath \"\"" Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.761781 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdvjw" event={"ID":"1b821c3b-5fa9-4f87-961b-cc027c329d9a","Type":"ContainerDied","Data":"2e2fcfa02b5e0cc7dbe4903bdef8c5f66209889e190b0188e8cd7addc75ad561"} Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.762131 4779 scope.go:117] "RemoveContainer" containerID="f9c7430a88f93a62517c081ee9d7e97bf2fc2b66ec9865d4d36e6e556f483e85" Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.761843 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdvjw" Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.793306 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdvjw"] Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.796881 4779 scope.go:117] "RemoveContainer" containerID="5708238a9d1d8d3f3baa7c691573f2d4ce829adfa033210ee2876078c3030864" Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.803060 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rdvjw"] Nov 28 13:44:29 crc kubenswrapper[4779]: I1128 13:44:29.821550 4779 scope.go:117] "RemoveContainer" containerID="c393ce0fdae1a86beeac3a533f54c9691bdbc50d698fc1f5259c571f16105b4d" Nov 28 13:44:30 crc kubenswrapper[4779]: I1128 13:44:30.227182 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-89xvz_7fe4463e-8739-494e-8171-7bfc925826a9/kube-rbac-proxy/0.log" Nov 28 13:44:30 crc kubenswrapper[4779]: I1128 13:44:30.305531 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-89xvz_7fe4463e-8739-494e-8171-7bfc925826a9/controller/0.log" Nov 28 13:44:30 crc kubenswrapper[4779]: I1128 13:44:30.858075 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-frr-files/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.066532 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-frr-files/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.081887 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-metrics/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.100566 4779 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-reloader/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.157537 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-reloader/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.294732 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-frr-files/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.343971 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-metrics/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.360732 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-metrics/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.375563 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-reloader/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.569952 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-reloader/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.583314 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-metrics/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.602429 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-frr-files/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.613119 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/controller/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.737316 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b821c3b-5fa9-4f87-961b-cc027c329d9a" path="/var/lib/kubelet/pods/1b821c3b-5fa9-4f87-961b-cc027c329d9a/volumes" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.768449 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/frr-metrics/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.816406 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/kube-rbac-proxy-frr/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.865049 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/kube-rbac-proxy/0.log" Nov 28 13:44:31 crc kubenswrapper[4779]: I1128 13:44:31.970435 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/reloader/0.log" Nov 28 13:44:32 crc kubenswrapper[4779]: I1128 13:44:32.078823 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-nxg68_ea534549-07a6-43e1-98e7-906ee50e4146/frr-k8s-webhook-server/0.log" Nov 28 13:44:32 crc kubenswrapper[4779]: I1128 13:44:32.300669 4779 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d5c964c78-9tlcl_93890301-ca3f-4009-a55d-960edac754a9/manager/0.log" Nov 28 13:44:32 crc kubenswrapper[4779]: I1128 13:44:32.420546 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7c8544dcdc-ggmwl_60e79db0-fa26-46e7-80d8-55720f1372a2/webhook-server/0.log" Nov 28 13:44:32 crc kubenswrapper[4779]: I1128 13:44:32.533497 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-flq64_7bd19fff-499e-443a-b571-8af43ae08b4e/kube-rbac-proxy/0.log" Nov 28 13:44:33 crc kubenswrapper[4779]: I1128 13:44:33.204169 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-flq64_7bd19fff-499e-443a-b571-8af43ae08b4e/speaker/0.log" Nov 28 13:44:33 crc kubenswrapper[4779]: I1128 13:44:33.552965 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/frr/0.log" Nov 28 13:44:39 crc kubenswrapper[4779]: I1128 13:44:39.186157 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-9c6b99df5-82cnl" podUID="75d5987a-c7cb-400e-8efb-7375385f0e20" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 28 13:44:46 crc kubenswrapper[4779]: I1128 13:44:46.285559 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:44:46 crc kubenswrapper[4779]: I1128 13:44:46.286425 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:44:46 crc kubenswrapper[4779]: I1128 13:44:46.286493 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 13:44:46 crc kubenswrapper[4779]: I1128 13:44:46.287694 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9f5178ede5c569f5567852868c11a94380c8b3f324c6b6ddd47699da5e82c76"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 13:44:46 crc kubenswrapper[4779]: I1128 13:44:46.287836 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://e9f5178ede5c569f5567852868c11a94380c8b3f324c6b6ddd47699da5e82c76" gracePeriod=600 Nov 28 13:44:46 crc kubenswrapper[4779]: I1128 13:44:46.982777 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="e9f5178ede5c569f5567852868c11a94380c8b3f324c6b6ddd47699da5e82c76" exitCode=0 Nov 28 13:44:46 crc kubenswrapper[4779]: I1128 13:44:46.982855 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" 
event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"e9f5178ede5c569f5567852868c11a94380c8b3f324c6b6ddd47699da5e82c76"} Nov 28 13:44:46 crc kubenswrapper[4779]: I1128 13:44:46.983407 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"} Nov 28 13:44:46 crc kubenswrapper[4779]: I1128 13:44:46.983499 4779 scope.go:117] "RemoveContainer" containerID="e0979e2873372762dc22f2d860bfe12ccf1b62b9acc4eb82e9e76a9701d5036b" Nov 28 13:44:47 crc kubenswrapper[4779]: I1128 13:44:47.559685 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/util/0.log" Nov 28 13:44:47 crc kubenswrapper[4779]: I1128 13:44:47.693453 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/util/0.log" Nov 28 13:44:47 crc kubenswrapper[4779]: I1128 13:44:47.713749 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/pull/0.log" Nov 28 13:44:47 crc kubenswrapper[4779]: I1128 13:44:47.725401 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/pull/0.log" Nov 28 13:44:47 crc kubenswrapper[4779]: I1128 13:44:47.877430 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/util/0.log" Nov 28 13:44:47 crc kubenswrapper[4779]: I1128 13:44:47.898587 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/extract/0.log" Nov 28 13:44:47 crc kubenswrapper[4779]: I1128 13:44:47.912470 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/pull/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.035312 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/util/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.189819 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/pull/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.226153 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/pull/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.250720 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/util/0.log" Nov 
28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.383774 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/extract/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.390366 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/util/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.410900 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/pull/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.542552 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/util/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.773110 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/pull/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.796131 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/pull/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.797193 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/util/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.936653 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/util/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.940899 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/pull/0.log" Nov 28 13:44:48 crc kubenswrapper[4779]: I1128 13:44:48.950604 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/extract/0.log" Nov 28 13:44:49 crc kubenswrapper[4779]: I1128 13:44:49.105950 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/extract-utilities/0.log" Nov 28 13:44:49 crc kubenswrapper[4779]: I1128 13:44:49.964674 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/extract-content/0.log" Nov 28 13:44:49 crc kubenswrapper[4779]: I1128 13:44:49.964898 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/extract-utilities/0.log" Nov 28 13:44:49 crc kubenswrapper[4779]: I1128 13:44:49.976949 4779 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/extract-content/0.log" Nov 28 13:44:50 crc kubenswrapper[4779]: I1128 13:44:50.141936 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/extract-utilities/0.log" Nov 28 13:44:50 crc kubenswrapper[4779]: I1128 13:44:50.226407 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/extract-content/0.log" Nov 28 13:44:50 crc kubenswrapper[4779]: I1128 13:44:50.342689 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/extract-utilities/0.log" Nov 28 13:44:50 crc kubenswrapper[4779]: I1128 13:44:50.538935 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/extract-utilities/0.log" Nov 28 13:44:50 crc kubenswrapper[4779]: I1128 13:44:50.613780 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/extract-content/0.log" Nov 28 13:44:50 crc kubenswrapper[4779]: I1128 13:44:50.637672 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/extract-content/0.log" Nov 28 13:44:50 crc kubenswrapper[4779]: I1128 13:44:50.671363 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/registry-server/0.log" Nov 28 13:44:50 crc kubenswrapper[4779]: I1128 13:44:50.788924 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/extract-utilities/0.log" Nov 28 13:44:50 crc kubenswrapper[4779]: I1128 13:44:50.815542 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/extract-content/0.log" Nov 28 13:44:51 crc kubenswrapper[4779]: I1128 13:44:51.016485 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r6z5b_6d803c44-5049-4974-ad24-8bdf8082456f/marketplace-operator/0.log" Nov 28 13:44:51 crc kubenswrapper[4779]: I1128 13:44:51.114984 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/extract-utilities/0.log" Nov 28 13:44:51 crc kubenswrapper[4779]: I1128 13:44:51.330188 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/extract-content/0.log" Nov 28 13:44:51 crc kubenswrapper[4779]: I1128 13:44:51.360964 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/extract-utilities/0.log" Nov 28 13:44:51 crc kubenswrapper[4779]: I1128 13:44:51.379798 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/extract-content/0.log" Nov 28 13:44:51 crc kubenswrapper[4779]: I1128 13:44:51.525433 4779 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/registry-server/0.log" Nov 28 13:44:51 crc kubenswrapper[4779]: I1128 13:44:51.544616 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/extract-utilities/0.log" Nov 28 13:44:52 crc kubenswrapper[4779]: I1128 13:44:52.120158 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/extract-utilities/0.log" Nov 28 13:44:52 crc kubenswrapper[4779]: I1128 13:44:52.137208 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/extract-content/0.log" Nov 28 13:44:52 crc kubenswrapper[4779]: I1128 13:44:52.177648 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/registry-server/0.log" Nov 28 13:44:52 crc kubenswrapper[4779]: I1128 13:44:52.331901 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/extract-content/0.log" Nov 28 13:44:52 crc kubenswrapper[4779]: I1128 13:44:52.332154 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/extract-content/0.log" Nov 28 13:44:52 crc kubenswrapper[4779]: I1128 13:44:52.347838 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/extract-utilities/0.log" Nov 28 13:44:52 crc kubenswrapper[4779]: I1128 13:44:52.499613 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/extract-content/0.log" Nov 28 13:44:52 crc kubenswrapper[4779]: I1128 13:44:52.546566 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/extract-utilities/0.log" Nov 28 13:44:52 crc kubenswrapper[4779]: I1128 13:44:52.974475 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/registry-server/0.log" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.196961 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df"] Nov 28 13:45:00 crc kubenswrapper[4779]: E1128 13:45:00.197942 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b821c3b-5fa9-4f87-961b-cc027c329d9a" containerName="registry-server" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.197964 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b821c3b-5fa9-4f87-961b-cc027c329d9a" containerName="registry-server" Nov 28 13:45:00 crc kubenswrapper[4779]: E1128 13:45:00.197978 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b821c3b-5fa9-4f87-961b-cc027c329d9a" containerName="extract-content" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.197984 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b821c3b-5fa9-4f87-961b-cc027c329d9a" containerName="extract-content" Nov 28 13:45:00 crc kubenswrapper[4779]: E1128 13:45:00.198010 4779 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1b821c3b-5fa9-4f87-961b-cc027c329d9a" containerName="extract-utilities" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.198017 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b821c3b-5fa9-4f87-961b-cc027c329d9a" containerName="extract-utilities" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.198228 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b821c3b-5fa9-4f87-961b-cc027c329d9a" containerName="registry-server" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.198868 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.202687 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.203615 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.210636 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df"] Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.256696 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65fb42af-0158-4437-9503-0e5fb11944df-secret-volume\") pod \"collect-profiles-29405625-sv6df\" (UID: \"65fb42af-0158-4437-9503-0e5fb11944df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.256989 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-848ch\" (UniqueName: \"kubernetes.io/projected/65fb42af-0158-4437-9503-0e5fb11944df-kube-api-access-848ch\") pod \"collect-profiles-29405625-sv6df\" (UID: \"65fb42af-0158-4437-9503-0e5fb11944df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.257134 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65fb42af-0158-4437-9503-0e5fb11944df-config-volume\") pod \"collect-profiles-29405625-sv6df\" (UID: \"65fb42af-0158-4437-9503-0e5fb11944df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.358725 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65fb42af-0158-4437-9503-0e5fb11944df-secret-volume\") pod \"collect-profiles-29405625-sv6df\" (UID: \"65fb42af-0158-4437-9503-0e5fb11944df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.359011 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-848ch\" (UniqueName: \"kubernetes.io/projected/65fb42af-0158-4437-9503-0e5fb11944df-kube-api-access-848ch\") pod \"collect-profiles-29405625-sv6df\" (UID: \"65fb42af-0158-4437-9503-0e5fb11944df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.359130 
4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65fb42af-0158-4437-9503-0e5fb11944df-config-volume\") pod \"collect-profiles-29405625-sv6df\" (UID: \"65fb42af-0158-4437-9503-0e5fb11944df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.360085 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65fb42af-0158-4437-9503-0e5fb11944df-config-volume\") pod \"collect-profiles-29405625-sv6df\" (UID: \"65fb42af-0158-4437-9503-0e5fb11944df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.679080 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65fb42af-0158-4437-9503-0e5fb11944df-secret-volume\") pod \"collect-profiles-29405625-sv6df\" (UID: \"65fb42af-0158-4437-9503-0e5fb11944df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.679123 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-848ch\" (UniqueName: \"kubernetes.io/projected/65fb42af-0158-4437-9503-0e5fb11944df-kube-api-access-848ch\") pod \"collect-profiles-29405625-sv6df\" (UID: \"65fb42af-0158-4437-9503-0e5fb11944df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:00 crc kubenswrapper[4779]: I1128 13:45:00.835732 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:01 crc kubenswrapper[4779]: I1128 13:45:01.287885 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df"] Nov 28 13:45:01 crc kubenswrapper[4779]: W1128 13:45:01.302377 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65fb42af_0158_4437_9503_0e5fb11944df.slice/crio-5d392c8b50d22be7f33505459a43d8e3666741423e664c571160a44d35c7c876 WatchSource:0}: Error finding container 5d392c8b50d22be7f33505459a43d8e3666741423e664c571160a44d35c7c876: Status 404 returned error can't find the container with id 5d392c8b50d22be7f33505459a43d8e3666741423e664c571160a44d35c7c876 Nov 28 13:45:02 crc kubenswrapper[4779]: I1128 13:45:02.115414 4779 generic.go:334] "Generic (PLEG): container finished" podID="65fb42af-0158-4437-9503-0e5fb11944df" containerID="df61c6ba54082234adb18b22c5a4bb3e7d946cfda1d2546dad2e0120dfdaa0b9" exitCode=0 Nov 28 13:45:02 crc kubenswrapper[4779]: I1128 13:45:02.115470 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" event={"ID":"65fb42af-0158-4437-9503-0e5fb11944df","Type":"ContainerDied","Data":"df61c6ba54082234adb18b22c5a4bb3e7d946cfda1d2546dad2e0120dfdaa0b9"} Nov 28 13:45:02 crc kubenswrapper[4779]: I1128 13:45:02.115740 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" event={"ID":"65fb42af-0158-4437-9503-0e5fb11944df","Type":"ContainerStarted","Data":"5d392c8b50d22be7f33505459a43d8e3666741423e664c571160a44d35c7c876"} Nov 28 13:45:03 crc kubenswrapper[4779]: I1128 
13:45:03.633817 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:03 crc kubenswrapper[4779]: I1128 13:45:03.726219 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-848ch\" (UniqueName: \"kubernetes.io/projected/65fb42af-0158-4437-9503-0e5fb11944df-kube-api-access-848ch\") pod \"65fb42af-0158-4437-9503-0e5fb11944df\" (UID: \"65fb42af-0158-4437-9503-0e5fb11944df\") " Nov 28 13:45:03 crc kubenswrapper[4779]: I1128 13:45:03.726628 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65fb42af-0158-4437-9503-0e5fb11944df-secret-volume\") pod \"65fb42af-0158-4437-9503-0e5fb11944df\" (UID: \"65fb42af-0158-4437-9503-0e5fb11944df\") " Nov 28 13:45:03 crc kubenswrapper[4779]: I1128 13:45:03.726823 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65fb42af-0158-4437-9503-0e5fb11944df-config-volume\") pod \"65fb42af-0158-4437-9503-0e5fb11944df\" (UID: \"65fb42af-0158-4437-9503-0e5fb11944df\") " Nov 28 13:45:03 crc kubenswrapper[4779]: I1128 13:45:03.728467 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65fb42af-0158-4437-9503-0e5fb11944df-config-volume" (OuterVolumeSpecName: "config-volume") pod "65fb42af-0158-4437-9503-0e5fb11944df" (UID: "65fb42af-0158-4437-9503-0e5fb11944df"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 13:45:03 crc kubenswrapper[4779]: I1128 13:45:03.732707 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65fb42af-0158-4437-9503-0e5fb11944df-kube-api-access-848ch" (OuterVolumeSpecName: "kube-api-access-848ch") pod "65fb42af-0158-4437-9503-0e5fb11944df" (UID: "65fb42af-0158-4437-9503-0e5fb11944df"). InnerVolumeSpecName "kube-api-access-848ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:45:03 crc kubenswrapper[4779]: I1128 13:45:03.740334 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65fb42af-0158-4437-9503-0e5fb11944df-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "65fb42af-0158-4437-9503-0e5fb11944df" (UID: "65fb42af-0158-4437-9503-0e5fb11944df"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 13:45:03 crc kubenswrapper[4779]: I1128 13:45:03.828669 4779 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65fb42af-0158-4437-9503-0e5fb11944df-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 13:45:03 crc kubenswrapper[4779]: I1128 13:45:03.828708 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-848ch\" (UniqueName: \"kubernetes.io/projected/65fb42af-0158-4437-9503-0e5fb11944df-kube-api-access-848ch\") on node \"crc\" DevicePath \"\"" Nov 28 13:45:03 crc kubenswrapper[4779]: I1128 13:45:03.828809 4779 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65fb42af-0158-4437-9503-0e5fb11944df-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 13:45:04 crc kubenswrapper[4779]: I1128 13:45:04.136599 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" event={"ID":"65fb42af-0158-4437-9503-0e5fb11944df","Type":"ContainerDied","Data":"5d392c8b50d22be7f33505459a43d8e3666741423e664c571160a44d35c7c876"} Nov 28 13:45:04 crc kubenswrapper[4779]: I1128 13:45:04.136640 4779 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d392c8b50d22be7f33505459a43d8e3666741423e664c571160a44d35c7c876" Nov 28 13:45:04 crc kubenswrapper[4779]: I1128 13:45:04.136669 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405625-sv6df" Nov 28 13:45:04 crc kubenswrapper[4779]: I1128 13:45:04.709970 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"] Nov 28 13:45:04 crc kubenswrapper[4779]: I1128 13:45:04.721113 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405580-8vvdg"] Nov 28 13:45:05 crc kubenswrapper[4779]: I1128 13:45:05.166465 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-l5jtg_06f1d580-00d9-4699-8e8d-8087523ef59a/prometheus-operator/0.log" Nov 28 13:45:05 crc kubenswrapper[4779]: I1128 13:45:05.303594 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4_4fc94f4f-278c-4c4f-a547-2779183ca661/prometheus-operator-admission-webhook/0.log" Nov 28 13:45:05 crc kubenswrapper[4779]: I1128 13:45:05.380480 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2_9aef4803-506a-4ca3-9bdd-2ef8865a975c/prometheus-operator-admission-webhook/0.log" Nov 28 13:45:05 crc kubenswrapper[4779]: I1128 13:45:05.547593 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-njrck_cfb01668-ce93-42c0-8c77-1aaac40d5160/perses-operator/0.log" Nov 28 13:45:05 crc kubenswrapper[4779]: I1128 13:45:05.556472 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-z4wlc_179dd1bb-6c8d-443a-a408-40273ae8f6f6/operator/0.log" Nov 28 13:45:05 crc kubenswrapper[4779]: I1128 13:45:05.735952 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2652142-08f6-4c0d-ad6c-8efd85280704" 
path="/var/lib/kubelet/pods/a2652142-08f6-4c0d-ad6c-8efd85280704/volumes" Nov 28 13:45:18 crc kubenswrapper[4779]: I1128 13:45:18.598380 4779 scope.go:117] "RemoveContainer" containerID="49b0ed9fd87451994d3a4f5594d47ef303a466e5ee608ce014c056900053945b" Nov 28 13:46:34 crc kubenswrapper[4779]: I1128 13:46:34.161814 4779 generic.go:334] "Generic (PLEG): container finished" podID="7515361b-565f-4285-b116-a04b2e17a118" containerID="6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970" exitCode=0 Nov 28 13:46:34 crc kubenswrapper[4779]: I1128 13:46:34.161899 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xp8br/must-gather-cmg8t" event={"ID":"7515361b-565f-4285-b116-a04b2e17a118","Type":"ContainerDied","Data":"6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970"} Nov 28 13:46:34 crc kubenswrapper[4779]: I1128 13:46:34.163491 4779 scope.go:117] "RemoveContainer" containerID="6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970" Nov 28 13:46:34 crc kubenswrapper[4779]: I1128 13:46:34.507812 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xp8br_must-gather-cmg8t_7515361b-565f-4285-b116-a04b2e17a118/gather/0.log" Nov 28 13:46:42 crc kubenswrapper[4779]: I1128 13:46:42.585084 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xp8br/must-gather-cmg8t"] Nov 28 13:46:42 crc kubenswrapper[4779]: I1128 13:46:42.585938 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xp8br/must-gather-cmg8t" podUID="7515361b-565f-4285-b116-a04b2e17a118" containerName="copy" containerID="cri-o://714911f522596752886c3ef56d13df25b835fee6526c5037b116dbe61e0eeb92" gracePeriod=2 Nov 28 13:46:42 crc kubenswrapper[4779]: I1128 13:46:42.596044 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xp8br/must-gather-cmg8t"] Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.028279 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xp8br_must-gather-cmg8t_7515361b-565f-4285-b116-a04b2e17a118/copy/0.log" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.029163 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xp8br/must-gather-cmg8t" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.156491 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f527f\" (UniqueName: \"kubernetes.io/projected/7515361b-565f-4285-b116-a04b2e17a118-kube-api-access-f527f\") pod \"7515361b-565f-4285-b116-a04b2e17a118\" (UID: \"7515361b-565f-4285-b116-a04b2e17a118\") " Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.156642 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7515361b-565f-4285-b116-a04b2e17a118-must-gather-output\") pod \"7515361b-565f-4285-b116-a04b2e17a118\" (UID: \"7515361b-565f-4285-b116-a04b2e17a118\") " Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.166429 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7515361b-565f-4285-b116-a04b2e17a118-kube-api-access-f527f" (OuterVolumeSpecName: "kube-api-access-f527f") pod "7515361b-565f-4285-b116-a04b2e17a118" (UID: "7515361b-565f-4285-b116-a04b2e17a118"). InnerVolumeSpecName "kube-api-access-f527f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.258783 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f527f\" (UniqueName: \"kubernetes.io/projected/7515361b-565f-4285-b116-a04b2e17a118-kube-api-access-f527f\") on node \"crc\" DevicePath \"\"" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.302765 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7515361b-565f-4285-b116-a04b2e17a118-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7515361b-565f-4285-b116-a04b2e17a118" (UID: "7515361b-565f-4285-b116-a04b2e17a118"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.306592 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xp8br_must-gather-cmg8t_7515361b-565f-4285-b116-a04b2e17a118/copy/0.log" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.307077 4779 generic.go:334] "Generic (PLEG): container finished" podID="7515361b-565f-4285-b116-a04b2e17a118" containerID="714911f522596752886c3ef56d13df25b835fee6526c5037b116dbe61e0eeb92" exitCode=143 Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.307156 4779 scope.go:117] "RemoveContainer" containerID="714911f522596752886c3ef56d13df25b835fee6526c5037b116dbe61e0eeb92" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.307354 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xp8br/must-gather-cmg8t" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.345544 4779 scope.go:117] "RemoveContainer" containerID="6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.363842 4779 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7515361b-565f-4285-b116-a04b2e17a118-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.417909 4779 scope.go:117] "RemoveContainer" containerID="714911f522596752886c3ef56d13df25b835fee6526c5037b116dbe61e0eeb92" Nov 28 13:46:43 crc kubenswrapper[4779]: E1128 13:46:43.418362 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"714911f522596752886c3ef56d13df25b835fee6526c5037b116dbe61e0eeb92\": container with ID starting with 714911f522596752886c3ef56d13df25b835fee6526c5037b116dbe61e0eeb92 not found: ID does not exist" containerID="714911f522596752886c3ef56d13df25b835fee6526c5037b116dbe61e0eeb92" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.418408 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"714911f522596752886c3ef56d13df25b835fee6526c5037b116dbe61e0eeb92"} err="failed to get container status \"714911f522596752886c3ef56d13df25b835fee6526c5037b116dbe61e0eeb92\": rpc error: code = NotFound desc = could not find container \"714911f522596752886c3ef56d13df25b835fee6526c5037b116dbe61e0eeb92\": container with ID starting with 714911f522596752886c3ef56d13df25b835fee6526c5037b116dbe61e0eeb92 not found: ID does not exist" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.418433 4779 scope.go:117] "RemoveContainer" containerID="6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970" Nov 28 13:46:43 crc 
kubenswrapper[4779]: E1128 13:46:43.419164 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970\": container with ID starting with 6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970 not found: ID does not exist" containerID="6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.419208 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970"} err="failed to get container status \"6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970\": rpc error: code = NotFound desc = could not find container \"6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970\": container with ID starting with 6987d79ccb0853d037bf530cbbde94901b356ecc956e9e70668379d3b3f48970 not found: ID does not exist" Nov 28 13:46:43 crc kubenswrapper[4779]: I1128 13:46:43.736543 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7515361b-565f-4285-b116-a04b2e17a118" path="/var/lib/kubelet/pods/7515361b-565f-4285-b116-a04b2e17a118/volumes" Nov 28 13:46:46 crc kubenswrapper[4779]: I1128 13:46:46.285366 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:46:46 crc kubenswrapper[4779]: I1128 13:46:46.285760 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:47:16 crc kubenswrapper[4779]: I1128 13:47:16.284666 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:47:16 crc kubenswrapper[4779]: I1128 13:47:16.285315 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:47:46 crc kubenswrapper[4779]: I1128 13:47:46.285516 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:47:46 crc kubenswrapper[4779]: I1128 13:47:46.286272 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:47:46 crc 
kubenswrapper[4779]: I1128 13:47:46.286340 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 13:47:46 crc kubenswrapper[4779]: I1128 13:47:46.287570 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 13:47:46 crc kubenswrapper[4779]: I1128 13:47:46.287693 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" gracePeriod=600 Nov 28 13:47:47 crc kubenswrapper[4779]: E1128 13:47:47.024781 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:47:47 crc kubenswrapper[4779]: I1128 13:47:47.035150 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" exitCode=0 Nov 28 13:47:47 crc kubenswrapper[4779]: I1128 13:47:47.035195 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"} Nov 28 13:47:47 crc kubenswrapper[4779]: I1128 13:47:47.035226 4779 scope.go:117] "RemoveContainer" containerID="e9f5178ede5c569f5567852868c11a94380c8b3f324c6b6ddd47699da5e82c76" Nov 28 13:47:48 crc kubenswrapper[4779]: I1128 13:47:48.048849 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:47:48 crc kubenswrapper[4779]: E1128 13:47:48.049314 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:47:59 crc kubenswrapper[4779]: I1128 13:47:59.734209 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:47:59 crc kubenswrapper[4779]: E1128 13:47:59.734841 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:48:14 crc kubenswrapper[4779]: I1128 13:48:14.726687 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:48:14 crc kubenswrapper[4779]: E1128 13:48:14.727535 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:48:26 crc kubenswrapper[4779]: I1128 13:48:26.726207 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:48:26 crc kubenswrapper[4779]: E1128 13:48:26.726843 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:48:38 crc kubenswrapper[4779]: I1128 13:48:38.727630 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:48:38 crc kubenswrapper[4779]: E1128 13:48:38.729646 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:48:51 crc kubenswrapper[4779]: I1128 13:48:51.727859 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:48:51 crc kubenswrapper[4779]: E1128 13:48:51.728611 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:49:03 crc kubenswrapper[4779]: I1128 13:49:03.727412 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:49:03 crc kubenswrapper[4779]: E1128 13:49:03.728226 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:49:18 crc kubenswrapper[4779]: I1128 13:49:18.727015 4779 
scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:49:18 crc kubenswrapper[4779]: E1128 13:49:18.727966 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.678941 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4ksl2/must-gather-p8g46"] Nov 28 13:49:22 crc kubenswrapper[4779]: E1128 13:49:22.679939 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7515361b-565f-4285-b116-a04b2e17a118" containerName="gather" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.679955 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="7515361b-565f-4285-b116-a04b2e17a118" containerName="gather" Nov 28 13:49:22 crc kubenswrapper[4779]: E1128 13:49:22.679978 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7515361b-565f-4285-b116-a04b2e17a118" containerName="copy" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.679984 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="7515361b-565f-4285-b116-a04b2e17a118" containerName="copy" Nov 28 13:49:22 crc kubenswrapper[4779]: E1128 13:49:22.679995 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65fb42af-0158-4437-9503-0e5fb11944df" containerName="collect-profiles" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.680002 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="65fb42af-0158-4437-9503-0e5fb11944df" containerName="collect-profiles" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.680237 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="65fb42af-0158-4437-9503-0e5fb11944df" containerName="collect-profiles" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.680263 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="7515361b-565f-4285-b116-a04b2e17a118" containerName="copy" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.680276 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="7515361b-565f-4285-b116-a04b2e17a118" containerName="gather" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.681314 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ksl2/must-gather-p8g46" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.686191 4779 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4ksl2"/"default-dockercfg-dqn2h" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.689276 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4ksl2"/"openshift-service-ca.crt" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.689654 4779 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4ksl2"/"kube-root-ca.crt" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.698979 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4ksl2/must-gather-p8g46"] Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.847403 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d4349ff9-0075-4c92-b53f-320ce678210e-must-gather-output\") pod \"must-gather-p8g46\" (UID: \"d4349ff9-0075-4c92-b53f-320ce678210e\") " pod="openshift-must-gather-4ksl2/must-gather-p8g46" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.847798 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42n6r\" (UniqueName: \"kubernetes.io/projected/d4349ff9-0075-4c92-b53f-320ce678210e-kube-api-access-42n6r\") pod \"must-gather-p8g46\" (UID: \"d4349ff9-0075-4c92-b53f-320ce678210e\") " pod="openshift-must-gather-4ksl2/must-gather-p8g46" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.949610 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d4349ff9-0075-4c92-b53f-320ce678210e-must-gather-output\") pod \"must-gather-p8g46\" (UID: \"d4349ff9-0075-4c92-b53f-320ce678210e\") " pod="openshift-must-gather-4ksl2/must-gather-p8g46" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.949923 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42n6r\" (UniqueName: \"kubernetes.io/projected/d4349ff9-0075-4c92-b53f-320ce678210e-kube-api-access-42n6r\") pod \"must-gather-p8g46\" (UID: \"d4349ff9-0075-4c92-b53f-320ce678210e\") " pod="openshift-must-gather-4ksl2/must-gather-p8g46" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.950079 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d4349ff9-0075-4c92-b53f-320ce678210e-must-gather-output\") pod \"must-gather-p8g46\" (UID: \"d4349ff9-0075-4c92-b53f-320ce678210e\") " pod="openshift-must-gather-4ksl2/must-gather-p8g46" Nov 28 13:49:22 crc kubenswrapper[4779]: I1128 13:49:22.968748 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42n6r\" (UniqueName: \"kubernetes.io/projected/d4349ff9-0075-4c92-b53f-320ce678210e-kube-api-access-42n6r\") pod \"must-gather-p8g46\" (UID: \"d4349ff9-0075-4c92-b53f-320ce678210e\") " pod="openshift-must-gather-4ksl2/must-gather-p8g46" Nov 28 13:49:23 crc kubenswrapper[4779]: I1128 13:49:23.000188 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ksl2/must-gather-p8g46" Nov 28 13:49:23 crc kubenswrapper[4779]: I1128 13:49:23.370601 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4ksl2/must-gather-p8g46"] Nov 28 13:49:24 crc kubenswrapper[4779]: I1128 13:49:24.307177 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ksl2/must-gather-p8g46" event={"ID":"d4349ff9-0075-4c92-b53f-320ce678210e","Type":"ContainerStarted","Data":"e1771f7e2112641f070212df553bddbfeb8014ee745bc3449d7df0ee1515fd79"} Nov 28 13:49:24 crc kubenswrapper[4779]: I1128 13:49:24.307718 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ksl2/must-gather-p8g46" event={"ID":"d4349ff9-0075-4c92-b53f-320ce678210e","Type":"ContainerStarted","Data":"5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84"} Nov 28 13:49:24 crc kubenswrapper[4779]: I1128 13:49:24.307736 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ksl2/must-gather-p8g46" event={"ID":"d4349ff9-0075-4c92-b53f-320ce678210e","Type":"ContainerStarted","Data":"bfc99c54c30006a23fd533b92b230c00442442b6d0399c1fa60bb4e1bf93ce2a"} Nov 28 13:49:24 crc kubenswrapper[4779]: I1128 13:49:24.339546 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4ksl2/must-gather-p8g46" podStartSLOduration=2.339523711 podStartE2EDuration="2.339523711s" podCreationTimestamp="2025-11-28 13:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 13:49:24.330987493 +0000 UTC m=+4424.896662847" watchObservedRunningTime="2025-11-28 13:49:24.339523711 +0000 UTC m=+4424.905199065" Nov 28 13:49:28 crc kubenswrapper[4779]: I1128 13:49:28.004543 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4ksl2/crc-debug-q4sq7"] Nov 28 13:49:28 crc kubenswrapper[4779]: I1128 13:49:28.006054 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" Nov 28 13:49:28 crc kubenswrapper[4779]: I1128 13:49:28.067988 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85698e7f-7364-4a79-a8c1-b82deeffbcf5-host\") pod \"crc-debug-q4sq7\" (UID: \"85698e7f-7364-4a79-a8c1-b82deeffbcf5\") " pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" Nov 28 13:49:28 crc kubenswrapper[4779]: I1128 13:49:28.068059 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h46ct\" (UniqueName: \"kubernetes.io/projected/85698e7f-7364-4a79-a8c1-b82deeffbcf5-kube-api-access-h46ct\") pod \"crc-debug-q4sq7\" (UID: \"85698e7f-7364-4a79-a8c1-b82deeffbcf5\") " pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" Nov 28 13:49:28 crc kubenswrapper[4779]: I1128 13:49:28.168962 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85698e7f-7364-4a79-a8c1-b82deeffbcf5-host\") pod \"crc-debug-q4sq7\" (UID: \"85698e7f-7364-4a79-a8c1-b82deeffbcf5\") " pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" Nov 28 13:49:28 crc kubenswrapper[4779]: I1128 13:49:28.169028 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h46ct\" (UniqueName: \"kubernetes.io/projected/85698e7f-7364-4a79-a8c1-b82deeffbcf5-kube-api-access-h46ct\") pod \"crc-debug-q4sq7\" (UID: \"85698e7f-7364-4a79-a8c1-b82deeffbcf5\") " pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" Nov 28 13:49:28 crc kubenswrapper[4779]: I1128 13:49:28.169306 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85698e7f-7364-4a79-a8c1-b82deeffbcf5-host\") pod \"crc-debug-q4sq7\" (UID: \"85698e7f-7364-4a79-a8c1-b82deeffbcf5\") " pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" Nov 28 13:49:28 crc kubenswrapper[4779]: I1128 13:49:28.197805 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h46ct\" (UniqueName: \"kubernetes.io/projected/85698e7f-7364-4a79-a8c1-b82deeffbcf5-kube-api-access-h46ct\") pod \"crc-debug-q4sq7\" (UID: \"85698e7f-7364-4a79-a8c1-b82deeffbcf5\") " pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" Nov 28 13:49:28 crc kubenswrapper[4779]: I1128 13:49:28.332398 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" Nov 28 13:49:28 crc kubenswrapper[4779]: W1128 13:49:28.366147 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85698e7f_7364_4a79_a8c1_b82deeffbcf5.slice/crio-9631c434cae401fa4dd069e552e63b9c156666a5c3e94ea6527a40832f2c8c9f WatchSource:0}: Error finding container 9631c434cae401fa4dd069e552e63b9c156666a5c3e94ea6527a40832f2c8c9f: Status 404 returned error can't find the container with id 9631c434cae401fa4dd069e552e63b9c156666a5c3e94ea6527a40832f2c8c9f Nov 28 13:49:29 crc kubenswrapper[4779]: I1128 13:49:29.352743 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" event={"ID":"85698e7f-7364-4a79-a8c1-b82deeffbcf5","Type":"ContainerStarted","Data":"12ad8eb75cf8269f6dee1a25cb6949179f8d925855d63c6a7025feea7e6f2eb5"} Nov 28 13:49:29 crc kubenswrapper[4779]: I1128 13:49:29.353450 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" event={"ID":"85698e7f-7364-4a79-a8c1-b82deeffbcf5","Type":"ContainerStarted","Data":"9631c434cae401fa4dd069e552e63b9c156666a5c3e94ea6527a40832f2c8c9f"} Nov 28 13:49:29 crc kubenswrapper[4779]: I1128 13:49:29.373431 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" podStartSLOduration=2.373413474 podStartE2EDuration="2.373413474s" podCreationTimestamp="2025-11-28 13:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 13:49:29.366441578 +0000 UTC m=+4429.932116932" watchObservedRunningTime="2025-11-28 13:49:29.373413474 +0000 UTC m=+4429.939088828" Nov 28 13:49:32 crc kubenswrapper[4779]: I1128 13:49:32.725886 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:49:32 crc kubenswrapper[4779]: E1128 13:49:32.726650 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:49:40 crc kubenswrapper[4779]: I1128 13:49:40.460557 4779 generic.go:334] "Generic (PLEG): container finished" podID="85698e7f-7364-4a79-a8c1-b82deeffbcf5" containerID="12ad8eb75cf8269f6dee1a25cb6949179f8d925855d63c6a7025feea7e6f2eb5" exitCode=0 Nov 28 13:49:40 crc kubenswrapper[4779]: I1128 13:49:40.460633 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" event={"ID":"85698e7f-7364-4a79-a8c1-b82deeffbcf5","Type":"ContainerDied","Data":"12ad8eb75cf8269f6dee1a25cb6949179f8d925855d63c6a7025feea7e6f2eb5"} Nov 28 13:49:41 crc kubenswrapper[4779]: I1128 13:49:41.573152 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" Nov 28 13:49:41 crc kubenswrapper[4779]: I1128 13:49:41.604331 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4ksl2/crc-debug-q4sq7"] Nov 28 13:49:41 crc kubenswrapper[4779]: I1128 13:49:41.612730 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4ksl2/crc-debug-q4sq7"] Nov 28 13:49:41 crc kubenswrapper[4779]: I1128 13:49:41.771800 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h46ct\" (UniqueName: \"kubernetes.io/projected/85698e7f-7364-4a79-a8c1-b82deeffbcf5-kube-api-access-h46ct\") pod \"85698e7f-7364-4a79-a8c1-b82deeffbcf5\" (UID: \"85698e7f-7364-4a79-a8c1-b82deeffbcf5\") " Nov 28 13:49:41 crc kubenswrapper[4779]: I1128 13:49:41.772139 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85698e7f-7364-4a79-a8c1-b82deeffbcf5-host\") pod \"85698e7f-7364-4a79-a8c1-b82deeffbcf5\" (UID: \"85698e7f-7364-4a79-a8c1-b82deeffbcf5\") " Nov 28 13:49:41 crc kubenswrapper[4779]: I1128 13:49:41.772254 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85698e7f-7364-4a79-a8c1-b82deeffbcf5-host" (OuterVolumeSpecName: "host") pod "85698e7f-7364-4a79-a8c1-b82deeffbcf5" (UID: "85698e7f-7364-4a79-a8c1-b82deeffbcf5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 13:49:41 crc kubenswrapper[4779]: I1128 13:49:41.773242 4779 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85698e7f-7364-4a79-a8c1-b82deeffbcf5-host\") on node \"crc\" DevicePath \"\"" Nov 28 13:49:41 crc kubenswrapper[4779]: I1128 13:49:41.779970 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85698e7f-7364-4a79-a8c1-b82deeffbcf5-kube-api-access-h46ct" (OuterVolumeSpecName: "kube-api-access-h46ct") pod "85698e7f-7364-4a79-a8c1-b82deeffbcf5" (UID: "85698e7f-7364-4a79-a8c1-b82deeffbcf5"). InnerVolumeSpecName "kube-api-access-h46ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:49:41 crc kubenswrapper[4779]: I1128 13:49:41.875220 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h46ct\" (UniqueName: \"kubernetes.io/projected/85698e7f-7364-4a79-a8c1-b82deeffbcf5-kube-api-access-h46ct\") on node \"crc\" DevicePath \"\"" Nov 28 13:49:42 crc kubenswrapper[4779]: I1128 13:49:42.478380 4779 scope.go:117] "RemoveContainer" containerID="12ad8eb75cf8269f6dee1a25cb6949179f8d925855d63c6a7025feea7e6f2eb5" Nov 28 13:49:42 crc kubenswrapper[4779]: I1128 13:49:42.478549 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ksl2/crc-debug-q4sq7" Nov 28 13:49:42 crc kubenswrapper[4779]: I1128 13:49:42.816020 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4ksl2/crc-debug-8p6zd"] Nov 28 13:49:42 crc kubenswrapper[4779]: E1128 13:49:42.816555 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85698e7f-7364-4a79-a8c1-b82deeffbcf5" containerName="container-00" Nov 28 13:49:42 crc kubenswrapper[4779]: I1128 13:49:42.816572 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="85698e7f-7364-4a79-a8c1-b82deeffbcf5" containerName="container-00" Nov 28 13:49:42 crc kubenswrapper[4779]: I1128 13:49:42.816809 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="85698e7f-7364-4a79-a8c1-b82deeffbcf5" containerName="container-00" Nov 28 13:49:42 crc kubenswrapper[4779]: I1128 13:49:42.817751 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4ksl2/crc-debug-8p6zd" Nov 28 13:49:42 crc kubenswrapper[4779]: I1128 13:49:42.996213 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdmnq\" (UniqueName: \"kubernetes.io/projected/8a1dd33d-3d1f-492e-ae52-e3b34e15f562-kube-api-access-zdmnq\") pod \"crc-debug-8p6zd\" (UID: \"8a1dd33d-3d1f-492e-ae52-e3b34e15f562\") " pod="openshift-must-gather-4ksl2/crc-debug-8p6zd" Nov 28 13:49:42 crc kubenswrapper[4779]: I1128 13:49:42.996309 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a1dd33d-3d1f-492e-ae52-e3b34e15f562-host\") pod \"crc-debug-8p6zd\" (UID: \"8a1dd33d-3d1f-492e-ae52-e3b34e15f562\") " pod="openshift-must-gather-4ksl2/crc-debug-8p6zd" Nov 28 13:49:43 crc kubenswrapper[4779]: I1128 13:49:43.098903 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdmnq\" (UniqueName: \"kubernetes.io/projected/8a1dd33d-3d1f-492e-ae52-e3b34e15f562-kube-api-access-zdmnq\") pod \"crc-debug-8p6zd\" (UID: \"8a1dd33d-3d1f-492e-ae52-e3b34e15f562\") " pod="openshift-must-gather-4ksl2/crc-debug-8p6zd" Nov 28 13:49:43 crc kubenswrapper[4779]: I1128 13:49:43.099002 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a1dd33d-3d1f-492e-ae52-e3b34e15f562-host\") pod \"crc-debug-8p6zd\" (UID: \"8a1dd33d-3d1f-492e-ae52-e3b34e15f562\") " pod="openshift-must-gather-4ksl2/crc-debug-8p6zd" Nov 28 13:49:43 crc kubenswrapper[4779]: I1128 13:49:43.099177 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a1dd33d-3d1f-492e-ae52-e3b34e15f562-host\") pod \"crc-debug-8p6zd\" (UID: \"8a1dd33d-3d1f-492e-ae52-e3b34e15f562\") " pod="openshift-must-gather-4ksl2/crc-debug-8p6zd" Nov 28 13:49:43 crc kubenswrapper[4779]: I1128 13:49:43.118399 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdmnq\" (UniqueName: \"kubernetes.io/projected/8a1dd33d-3d1f-492e-ae52-e3b34e15f562-kube-api-access-zdmnq\") pod \"crc-debug-8p6zd\" (UID: \"8a1dd33d-3d1f-492e-ae52-e3b34e15f562\") " pod="openshift-must-gather-4ksl2/crc-debug-8p6zd" Nov 28 13:49:43 crc kubenswrapper[4779]: I1128 13:49:43.139128 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ksl2/crc-debug-8p6zd" Nov 28 13:49:43 crc kubenswrapper[4779]: W1128 13:49:43.189891 4779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a1dd33d_3d1f_492e_ae52_e3b34e15f562.slice/crio-68e44532be6d7ec99ca90ba41c4491946e8b003ae67c2123ce4897b5126f6690 WatchSource:0}: Error finding container 68e44532be6d7ec99ca90ba41c4491946e8b003ae67c2123ce4897b5126f6690: Status 404 returned error can't find the container with id 68e44532be6d7ec99ca90ba41c4491946e8b003ae67c2123ce4897b5126f6690 Nov 28 13:49:43 crc kubenswrapper[4779]: I1128 13:49:43.492375 4779 generic.go:334] "Generic (PLEG): container finished" podID="8a1dd33d-3d1f-492e-ae52-e3b34e15f562" containerID="1f3c5b4dd0e40080487b7c537762e6ae678f9e69490f6edd90fc4e1ad1b7a041" exitCode=1 Nov 28 13:49:43 crc kubenswrapper[4779]: I1128 13:49:43.492476 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ksl2/crc-debug-8p6zd" event={"ID":"8a1dd33d-3d1f-492e-ae52-e3b34e15f562","Type":"ContainerDied","Data":"1f3c5b4dd0e40080487b7c537762e6ae678f9e69490f6edd90fc4e1ad1b7a041"} Nov 28 13:49:43 crc kubenswrapper[4779]: I1128 13:49:43.492522 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ksl2/crc-debug-8p6zd" event={"ID":"8a1dd33d-3d1f-492e-ae52-e3b34e15f562","Type":"ContainerStarted","Data":"68e44532be6d7ec99ca90ba41c4491946e8b003ae67c2123ce4897b5126f6690"} Nov 28 13:49:43 crc kubenswrapper[4779]: I1128 13:49:43.543380 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4ksl2/crc-debug-8p6zd"] Nov 28 13:49:43 crc kubenswrapper[4779]: I1128 13:49:43.555541 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4ksl2/crc-debug-8p6zd"] Nov 28 13:49:43 crc kubenswrapper[4779]: I1128 13:49:43.737441 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85698e7f-7364-4a79-a8c1-b82deeffbcf5" path="/var/lib/kubelet/pods/85698e7f-7364-4a79-a8c1-b82deeffbcf5/volumes" Nov 28 13:49:44 crc kubenswrapper[4779]: I1128 13:49:44.608557 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4ksl2/crc-debug-8p6zd" Nov 28 13:49:44 crc kubenswrapper[4779]: I1128 13:49:44.732030 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a1dd33d-3d1f-492e-ae52-e3b34e15f562-host\") pod \"8a1dd33d-3d1f-492e-ae52-e3b34e15f562\" (UID: \"8a1dd33d-3d1f-492e-ae52-e3b34e15f562\") " Nov 28 13:49:44 crc kubenswrapper[4779]: I1128 13:49:44.732177 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a1dd33d-3d1f-492e-ae52-e3b34e15f562-host" (OuterVolumeSpecName: "host") pod "8a1dd33d-3d1f-492e-ae52-e3b34e15f562" (UID: "8a1dd33d-3d1f-492e-ae52-e3b34e15f562"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 13:49:44 crc kubenswrapper[4779]: I1128 13:49:44.732193 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdmnq\" (UniqueName: \"kubernetes.io/projected/8a1dd33d-3d1f-492e-ae52-e3b34e15f562-kube-api-access-zdmnq\") pod \"8a1dd33d-3d1f-492e-ae52-e3b34e15f562\" (UID: \"8a1dd33d-3d1f-492e-ae52-e3b34e15f562\") " Nov 28 13:49:44 crc kubenswrapper[4779]: I1128 13:49:44.733297 4779 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8a1dd33d-3d1f-492e-ae52-e3b34e15f562-host\") on node \"crc\" DevicePath \"\"" Nov 28 13:49:44 crc kubenswrapper[4779]: I1128 13:49:44.738751 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a1dd33d-3d1f-492e-ae52-e3b34e15f562-kube-api-access-zdmnq" (OuterVolumeSpecName: "kube-api-access-zdmnq") pod "8a1dd33d-3d1f-492e-ae52-e3b34e15f562" (UID: "8a1dd33d-3d1f-492e-ae52-e3b34e15f562"). InnerVolumeSpecName "kube-api-access-zdmnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:49:44 crc kubenswrapper[4779]: I1128 13:49:44.834985 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdmnq\" (UniqueName: \"kubernetes.io/projected/8a1dd33d-3d1f-492e-ae52-e3b34e15f562-kube-api-access-zdmnq\") on node \"crc\" DevicePath \"\"" Nov 28 13:49:45 crc kubenswrapper[4779]: I1128 13:49:45.515279 4779 scope.go:117] "RemoveContainer" containerID="1f3c5b4dd0e40080487b7c537762e6ae678f9e69490f6edd90fc4e1ad1b7a041" Nov 28 13:49:45 crc kubenswrapper[4779]: I1128 13:49:45.515353 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4ksl2/crc-debug-8p6zd" Nov 28 13:49:45 crc kubenswrapper[4779]: I1128 13:49:45.740146 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a1dd33d-3d1f-492e-ae52-e3b34e15f562" path="/var/lib/kubelet/pods/8a1dd33d-3d1f-492e-ae52-e3b34e15f562/volumes" Nov 28 13:49:46 crc kubenswrapper[4779]: I1128 13:49:46.726459 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:49:46 crc kubenswrapper[4779]: E1128 13:49:46.727013 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:49:57 crc kubenswrapper[4779]: I1128 13:49:57.726546 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:49:57 crc kubenswrapper[4779]: E1128 13:49:57.727396 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:50:08 crc kubenswrapper[4779]: I1128 13:50:08.726299 4779 scope.go:117] "RemoveContainer" 
containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:50:08 crc kubenswrapper[4779]: E1128 13:50:08.727050 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:50:23 crc kubenswrapper[4779]: I1128 13:50:23.727600 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:50:23 crc kubenswrapper[4779]: E1128 13:50:23.728603 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:50:38 crc kubenswrapper[4779]: I1128 13:50:38.726760 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:50:38 crc kubenswrapper[4779]: E1128 13:50:38.727465 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:50:49 crc kubenswrapper[4779]: I1128 13:50:49.735825 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a" Nov 28 13:50:49 crc kubenswrapper[4779]: E1128 13:50:49.736517 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" Nov 28 13:50:50 crc kubenswrapper[4779]: I1128 13:50:50.365001 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7222438c-fe9c-429a-899e-269d84def6d7/init-config-reloader/0.log" Nov 28 13:50:50 crc kubenswrapper[4779]: I1128 13:50:50.552618 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7222438c-fe9c-429a-899e-269d84def6d7/config-reloader/0.log" Nov 28 13:50:50 crc kubenswrapper[4779]: I1128 13:50:50.556699 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7222438c-fe9c-429a-899e-269d84def6d7/init-config-reloader/0.log" Nov 28 13:50:50 crc kubenswrapper[4779]: I1128 13:50:50.570864 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7222438c-fe9c-429a-899e-269d84def6d7/alertmanager/0.log" Nov 28 13:50:50 crc kubenswrapper[4779]: I1128 
13:50:50.712277 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_11b20377-b66b-48ee-a4ac-a9f12faf621c/aodh-api/0.log" Nov 28 13:50:50 crc kubenswrapper[4779]: I1128 13:50:50.744616 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_11b20377-b66b-48ee-a4ac-a9f12faf621c/aodh-evaluator/0.log" Nov 28 13:50:50 crc kubenswrapper[4779]: I1128 13:50:50.776981 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_11b20377-b66b-48ee-a4ac-a9f12faf621c/aodh-listener/0.log" Nov 28 13:50:50 crc kubenswrapper[4779]: I1128 13:50:50.802437 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_11b20377-b66b-48ee-a4ac-a9f12faf621c/aodh-notifier/0.log" Nov 28 13:50:50 crc kubenswrapper[4779]: I1128 13:50:50.951681 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b764d4b5d-q6jq2_2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9/barbican-api/0.log" Nov 28 13:50:50 crc kubenswrapper[4779]: I1128 13:50:50.970663 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b764d4b5d-q6jq2_2cfb62ec-1fc1-42e9-b77b-9883c7deeaa9/barbican-api-log/0.log" Nov 28 13:50:51 crc kubenswrapper[4779]: I1128 13:50:51.152876 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7784844594-g7gws_6f944a10-9e80-47a5-8ad8-3b6edc0c3315/barbican-keystone-listener/0.log" Nov 28 13:50:51 crc kubenswrapper[4779]: I1128 13:50:51.179247 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7784844594-g7gws_6f944a10-9e80-47a5-8ad8-3b6edc0c3315/barbican-keystone-listener-log/0.log" Nov 28 13:50:51 crc kubenswrapper[4779]: I1128 13:50:51.207769 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79c7d84d4c-82wcz_684d6129-3c1c-43df-b258-c32b447736d1/barbican-worker/0.log" Nov 28 13:50:51 crc kubenswrapper[4779]: I1128 13:50:51.359533 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79c7d84d4c-82wcz_684d6129-3c1c-43df-b258-c32b447736d1/barbican-worker-log/0.log" Nov 28 13:50:51 crc kubenswrapper[4779]: I1128 13:50:51.438704 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-4tp8f_81be23ef-d854-4ac3-8f39-601540e013ea/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:50:51 crc kubenswrapper[4779]: I1128 13:50:51.562163 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13db3856-5125-439c-86a8-4493e5619b44/ceilometer-central-agent/0.log" Nov 28 13:50:51 crc kubenswrapper[4779]: I1128 13:50:51.579440 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13db3856-5125-439c-86a8-4493e5619b44/ceilometer-notification-agent/0.log" Nov 28 13:50:51 crc kubenswrapper[4779]: I1128 13:50:51.687772 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13db3856-5125-439c-86a8-4493e5619b44/proxy-httpd/0.log" Nov 28 13:50:51 crc kubenswrapper[4779]: I1128 13:50:51.731246 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_13db3856-5125-439c-86a8-4493e5619b44/sg-core/0.log" Nov 28 13:50:52 crc kubenswrapper[4779]: I1128 13:50:52.359154 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8605d0be-235c-4b63-8781-ea140c60e622/cinder-api-log/0.log" Nov 28 13:50:52 crc 
kubenswrapper[4779]: I1128 13:50:52.445799 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8605d0be-235c-4b63-8781-ea140c60e622/cinder-api/0.log" Nov 28 13:50:52 crc kubenswrapper[4779]: I1128 13:50:52.610007 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b208660d-de0e-4218-a31b-66ce968db066/probe/0.log" Nov 28 13:50:52 crc kubenswrapper[4779]: I1128 13:50:52.665720 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-s4kqc_c4e4bcb3-1c6f-4b3c-9cfb-bcffde886f96/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:50:52 crc kubenswrapper[4779]: I1128 13:50:52.698405 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b208660d-de0e-4218-a31b-66ce968db066/cinder-scheduler/0.log" Nov 28 13:50:52 crc kubenswrapper[4779]: I1128 13:50:52.909421 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-45xv5_21c70f9d-fd7b-4629-8b4f-0f745fd9eccb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:50:52 crc kubenswrapper[4779]: I1128 13:50:52.946553 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-t8mt8_4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f/init/0.log" Nov 28 13:50:53 crc kubenswrapper[4779]: I1128 13:50:53.054642 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-t8mt8_4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f/init/0.log" Nov 28 13:50:53 crc kubenswrapper[4779]: I1128 13:50:53.138844 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb85b8995-t8mt8_4d53ab8b-7d5c-4b0f-9cfa-992ed1fd2c0f/dnsmasq-dns/0.log" Nov 28 13:50:53 crc kubenswrapper[4779]: I1128 13:50:53.155370 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bgfzz_867eb458-fc69-4d08-958e-67f69bbf7ec9/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:50:53 crc kubenswrapper[4779]: I1128 13:50:53.338545 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_44e7698e-14e1-4bbe-849b-3a90b6ebd431/glance-httpd/0.log" Nov 28 13:50:53 crc kubenswrapper[4779]: I1128 13:50:53.359540 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_44e7698e-14e1-4bbe-849b-3a90b6ebd431/glance-log/0.log" Nov 28 13:50:53 crc kubenswrapper[4779]: I1128 13:50:53.501681 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_05d19641-3a16-482f-bcaf-da12573ca2e6/glance-httpd/0.log" Nov 28 13:50:53 crc kubenswrapper[4779]: I1128 13:50:53.546166 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_05d19641-3a16-482f-bcaf-da12573ca2e6/glance-log/0.log" Nov 28 13:50:53 crc kubenswrapper[4779]: I1128 13:50:53.845868 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6dc88d6fdd-9vtxx_a86fc8ed-8b8b-4a8a-8b27-0aa2d40fb61b/heat-engine/0.log" Nov 28 13:50:54 crc kubenswrapper[4779]: I1128 13:50:54.044632 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-htvcg_85a14d9a-2667-48a0-83c1-2e37f92590fb/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:50:54 crc 
kubenswrapper[4779]: I1128 13:50:54.066321 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5675dff4b5-5c9sq_ec26de16-988c-4242-8de5-e379eeff18d8/heat-api/0.log" Nov 28 13:50:54 crc kubenswrapper[4779]: I1128 13:50:54.223260 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-74c96b7975-gndjl_b6ecf1b7-5d5c-4a0f-9fcb-caed8534c325/heat-cfnapi/0.log" Nov 28 13:50:54 crc kubenswrapper[4779]: I1128 13:50:54.244859 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-cjqrj_10420a90-84fa-45f9-a726-b3fcb8db4a20/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:50:54 crc kubenswrapper[4779]: I1128 13:50:54.367783 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-68b65c9788-nmrvn_8da74c5c-34bf-4136-a395-51d2be7258db/keystone-api/0.log" Nov 28 13:50:54 crc kubenswrapper[4779]: I1128 13:50:54.432519 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29405581-vh7vm_8ad47080-f81f-4366-ac8b-b110a18c1834/keystone-cron/0.log" Nov 28 13:50:54 crc kubenswrapper[4779]: I1128 13:50:54.480037 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4e75c99e-5273-44e8-a5d1-98b317b5dacf/kube-state-metrics/0.log" Nov 28 13:50:54 crc kubenswrapper[4779]: I1128 13:50:54.614859 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vs9sh_303327cf-5fdb-49b9-a9ee-f8498657b10d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:50:54 crc kubenswrapper[4779]: I1128 13:50:54.867175 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7756d796d9-vcgbk_0cb2d061-fb70-4108-8204-9bf7e699c89f/neutron-httpd/0.log" Nov 28 13:50:54 crc kubenswrapper[4779]: I1128 13:50:54.883945 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7756d796d9-vcgbk_0cb2d061-fb70-4108-8204-9bf7e699c89f/neutron-api/0.log" Nov 28 13:50:54 crc kubenswrapper[4779]: I1128 13:50:54.987117 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m59g8_6b291c86-c80b-41e0-9ebd-bff5f1d3de42/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:50:55 crc kubenswrapper[4779]: I1128 13:50:55.385280 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ccd118c8-a309-4e17-952e-647ce404bbeb/nova-api-log/0.log" Nov 28 13:50:55 crc kubenswrapper[4779]: I1128 13:50:55.522223 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_55d804a5-57cf-458c-8941-c0ec9ea50d24/nova-cell0-conductor-conductor/0.log" Nov 28 13:50:55 crc kubenswrapper[4779]: I1128 13:50:55.772439 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ccd118c8-a309-4e17-952e-647ce404bbeb/nova-api-api/0.log" Nov 28 13:50:55 crc kubenswrapper[4779]: I1128 13:50:55.816602 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_bfd44820-2805-4e67-a5f8-05a5a31dc047/nova-cell1-conductor-conductor/0.log" Nov 28 13:50:55 crc kubenswrapper[4779]: I1128 13:50:55.924632 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c2f7b630-265b-4501-87b8-44f47fe9a11f/nova-cell1-novncproxy-novncproxy/0.log" Nov 28 13:50:56 crc kubenswrapper[4779]: I1128 13:50:56.082147 
4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-k9nk4_42f930a2-ac0c-43b5-ab17-1ccd2f30340e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:50:56 crc kubenswrapper[4779]: I1128 13:50:56.253540 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9bdd523c-399a-4ea8-999b-850a2dd6897c/nova-metadata-log/0.log" Nov 28 13:50:56 crc kubenswrapper[4779]: I1128 13:50:56.505090 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_458c0c46-271b-40b8-aadc-10cfb6939487/nova-scheduler-scheduler/0.log" Nov 28 13:50:56 crc kubenswrapper[4779]: I1128 13:50:56.599548 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c27e5f17-320d-472d-a3e7-6a0e9fae960b/mysql-bootstrap/0.log" Nov 28 13:50:56 crc kubenswrapper[4779]: I1128 13:50:56.697489 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c27e5f17-320d-472d-a3e7-6a0e9fae960b/mysql-bootstrap/0.log" Nov 28 13:50:56 crc kubenswrapper[4779]: I1128 13:50:56.762336 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c27e5f17-320d-472d-a3e7-6a0e9fae960b/galera/0.log" Nov 28 13:50:56 crc kubenswrapper[4779]: I1128 13:50:56.958034 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bd0f63de-dfe7-471d-92d8-b41e260d970b/mysql-bootstrap/0.log" Nov 28 13:50:57 crc kubenswrapper[4779]: I1128 13:50:57.136798 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bd0f63de-dfe7-471d-92d8-b41e260d970b/galera/0.log" Nov 28 13:50:57 crc kubenswrapper[4779]: I1128 13:50:57.178163 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bd0f63de-dfe7-471d-92d8-b41e260d970b/mysql-bootstrap/0.log" Nov 28 13:50:57 crc kubenswrapper[4779]: I1128 13:50:57.361039 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_488dc09e-4b09-40a3-8bfa-fd3116307f09/openstackclient/0.log" Nov 28 13:50:57 crc kubenswrapper[4779]: I1128 13:50:57.381157 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7bg4l_5049f1f8-c081-4671-8d6a-9282a53dd6bd/ovn-controller/0.log" Nov 28 13:50:57 crc kubenswrapper[4779]: I1128 13:50:57.588926 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-szlhd_2e43d521-a73a-4d72-8270-bb959b5d0a53/openstack-network-exporter/0.log" Nov 28 13:50:57 crc kubenswrapper[4779]: I1128 13:50:57.759242 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9bdd523c-399a-4ea8-999b-850a2dd6897c/nova-metadata-metadata/0.log" Nov 28 13:50:57 crc kubenswrapper[4779]: I1128 13:50:57.762417 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c6d9j_a9ef6128-c3cf-4c5a-80ff-e0c4c263637d/ovsdb-server-init/0.log" Nov 28 13:50:57 crc kubenswrapper[4779]: I1128 13:50:57.954944 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c6d9j_a9ef6128-c3cf-4c5a-80ff-e0c4c263637d/ovs-vswitchd/0.log" Nov 28 13:50:57 crc kubenswrapper[4779]: I1128 13:50:57.979654 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c6d9j_a9ef6128-c3cf-4c5a-80ff-e0c4c263637d/ovsdb-server-init/0.log" Nov 28 13:50:58 crc kubenswrapper[4779]: I1128 
13:50:58.004747 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c6d9j_a9ef6128-c3cf-4c5a-80ff-e0c4c263637d/ovsdb-server/0.log" Nov 28 13:50:58 crc kubenswrapper[4779]: I1128 13:50:58.194824 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-nbj5n_b6763a62-0f2c-4f57-9391-731d12201cce/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 13:50:58 crc kubenswrapper[4779]: I1128 13:50:58.284544 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d78bd78f-4723-4bf3-99ee-95509a0100af/openstack-network-exporter/0.log" Nov 28 13:50:58 crc kubenswrapper[4779]: I1128 13:50:58.330234 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d78bd78f-4723-4bf3-99ee-95509a0100af/ovn-northd/0.log" Nov 28 13:50:58 crc kubenswrapper[4779]: I1128 13:50:58.445146 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aa122564-e2c8-4ceb-b66d-1b677aaa4b21/openstack-network-exporter/0.log" Nov 28 13:50:58 crc kubenswrapper[4779]: I1128 13:50:58.489562 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aa122564-e2c8-4ceb-b66d-1b677aaa4b21/ovsdbserver-nb/0.log" Nov 28 13:50:58 crc kubenswrapper[4779]: I1128 13:50:58.704717 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7312815e-950e-48e2-bcbe-c74717279168/openstack-network-exporter/0.log" Nov 28 13:50:58 crc kubenswrapper[4779]: I1128 13:50:58.877467 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7312815e-950e-48e2-bcbe-c74717279168/ovsdbserver-sb/0.log" Nov 28 13:50:59 crc kubenswrapper[4779]: I1128 13:50:59.033733 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-674bfd5544-x2xz6_60595e82-374d-4133-8a19-c240290be2da/placement-api/0.log" Nov 28 13:50:59 crc kubenswrapper[4779]: I1128 13:50:59.092359 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-674bfd5544-x2xz6_60595e82-374d-4133-8a19-c240290be2da/placement-log/0.log" Nov 28 13:50:59 crc kubenswrapper[4779]: I1128 13:50:59.230295 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ee069283-02ed-414e-960c-7ae288363bb4/init-config-reloader/0.log" Nov 28 13:50:59 crc kubenswrapper[4779]: I1128 13:50:59.759784 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ee069283-02ed-414e-960c-7ae288363bb4/init-config-reloader/0.log" Nov 28 13:50:59 crc kubenswrapper[4779]: I1128 13:50:59.790952 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ee069283-02ed-414e-960c-7ae288363bb4/config-reloader/0.log" Nov 28 13:50:59 crc kubenswrapper[4779]: I1128 13:50:59.794068 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ee069283-02ed-414e-960c-7ae288363bb4/prometheus/0.log" Nov 28 13:50:59 crc kubenswrapper[4779]: I1128 13:50:59.823907 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ee069283-02ed-414e-960c-7ae288363bb4/thanos-sidecar/0.log" Nov 28 13:51:00 crc kubenswrapper[4779]: I1128 13:51:00.026183 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_80c2f0f7-d979-400e-b9fe-9369c3fc8ec5/setup-container/0.log" Nov 28 13:51:00 crc 
kubenswrapper[4779]: I1128 13:51:00.196472 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_80c2f0f7-d979-400e-b9fe-9369c3fc8ec5/setup-container/0.log"
Nov 28 13:51:00 crc kubenswrapper[4779]: I1128 13:51:00.297034 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b0a12679-627a-4310-a9f7-93731231b12e/setup-container/0.log"
Nov 28 13:51:00 crc kubenswrapper[4779]: I1128 13:51:00.323732 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_80c2f0f7-d979-400e-b9fe-9369c3fc8ec5/rabbitmq/0.log"
Nov 28 13:51:00 crc kubenswrapper[4779]: I1128 13:51:00.468067 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b0a12679-627a-4310-a9f7-93731231b12e/setup-container/0.log"
Nov 28 13:51:00 crc kubenswrapper[4779]: I1128 13:51:00.561274 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-qkpsw_dd33d0ba-c2bf-47d8-8c6c-d1b6d9a67449/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 28 13:51:00 crc kubenswrapper[4779]: I1128 13:51:00.568663 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b0a12679-627a-4310-a9f7-93731231b12e/rabbitmq/0.log"
Nov 28 13:51:00 crc kubenswrapper[4779]: I1128 13:51:00.738982 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-vz8vz_1bec9363-8311-40e0-ab18-fcaf7acf3dc9/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 28 13:51:00 crc kubenswrapper[4779]: I1128 13:51:00.849419 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-q474j_833a68ba-d01e-49d6-9055-0e4342fd3305/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 28 13:51:01 crc kubenswrapper[4779]: I1128 13:51:01.007784 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-484k6_4e42e6af-a9aa-47a7-86ff-980266468175/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 28 13:51:01 crc kubenswrapper[4779]: I1128 13:51:01.074672 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-7mwdd_27fc90d6-d05c-412a-97fb-d9fe40d2a964/ssh-known-hosts-edpm-deployment/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.010808 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-9c6b99df5-82cnl_75d5987a-c7cb-400e-8efb-7375385f0e20/proxy-server/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.166825 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-9c6b99df5-82cnl_75d5987a-c7cb-400e-8efb-7375385f0e20/proxy-httpd/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.193452 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-25kzk_5e769641-0f27-4979-9823-dff8fe453054/swift-ring-rebalance/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.245489 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/account-auditor/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.355733 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/account-reaper/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.483424 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/account-replicator/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.511891 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/account-server/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.517229 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/container-auditor/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.595864 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/container-replicator/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.696234 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/container-server/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.697818 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/container-updater/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.723784 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/object-auditor/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.725909 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"
Nov 28 13:51:02 crc kubenswrapper[4779]: E1128 13:51:02.726219 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.831603 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/object-expirer/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.930033 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/object-server/0.log"
Nov 28 13:51:02 crc kubenswrapper[4779]: I1128 13:51:02.950028 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/object-replicator/0.log"
Nov 28 13:51:03 crc kubenswrapper[4779]: I1128 13:51:03.041690 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/object-updater/0.log"
Nov 28 13:51:03 crc kubenswrapper[4779]: I1128 13:51:03.071568 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/rsync/0.log"
Nov 28 13:51:03 crc kubenswrapper[4779]: I1128 13:51:03.169037 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_265ee755-a70e-4f35-a40a-ef525a3c5088/swift-recon-cron/0.log"
Nov 28 13:51:03 crc kubenswrapper[4779]: I1128 13:51:03.375932 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-rs7mr_1e165593-fee0-4b82-87b3-6f102fdabe4f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 28 13:51:03 crc kubenswrapper[4779]: I1128 13:51:03.423422 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nw7dt_01e25eb1-de3d-4912-933f-09b22837436d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 28 13:51:12 crc kubenswrapper[4779]: I1128 13:51:12.582380 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_28783fa8-aac9-4041-aba2-ba78f5be6f66/memcached/0.log"
Nov 28 13:51:13 crc kubenswrapper[4779]: I1128 13:51:13.727185 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"
Nov 28 13:51:13 crc kubenswrapper[4779]: E1128 13:51:13.727448 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:51:26 crc kubenswrapper[4779]: I1128 13:51:26.726296 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"
Nov 28 13:51:26 crc kubenswrapper[4779]: E1128 13:51:26.726973 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:51:31 crc kubenswrapper[4779]: I1128 13:51:31.458878 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-hhr2g_e7e646e3-00c9-4359-b012-aaff60962a76/kube-rbac-proxy/0.log"
Nov 28 13:51:31 crc kubenswrapper[4779]: I1128 13:51:31.507611 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-hhr2g_e7e646e3-00c9-4359-b012-aaff60962a76/manager/0.log"
Nov 28 13:51:31 crc kubenswrapper[4779]: I1128 13:51:31.687546 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-l52fj_854f928b-5068-4de9-b865-7fb2a26ca9e4/kube-rbac-proxy/0.log"
Nov 28 13:51:31 crc kubenswrapper[4779]: I1128 13:51:31.722238 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-l52fj_854f928b-5068-4de9-b865-7fb2a26ca9e4/manager/0.log"
Nov 28 13:51:31 crc kubenswrapper[4779]: I1128 13:51:31.857278 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-rh5q9_8d20efbb-527c-4085-a974-d49ee454b545/kube-rbac-proxy/0.log"
Nov 28 13:51:31 crc kubenswrapper[4779]: I1128 13:51:31.894726 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/util/0.log"
Nov 28 13:51:31 crc kubenswrapper[4779]: I1128 13:51:31.936010 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-rh5q9_8d20efbb-527c-4085-a974-d49ee454b545/manager/0.log"
Nov 28 13:51:32 crc kubenswrapper[4779]: I1128 13:51:32.096770 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/pull/0.log"
Nov 28 13:51:32 crc kubenswrapper[4779]: I1128 13:51:32.119777 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/util/0.log"
Nov 28 13:51:32 crc kubenswrapper[4779]: I1128 13:51:32.142387 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/pull/0.log"
Nov 28 13:51:32 crc kubenswrapper[4779]: I1128 13:51:32.288706 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/util/0.log"
Nov 28 13:51:32 crc kubenswrapper[4779]: I1128 13:51:32.299855 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/extract/0.log"
Nov 28 13:51:32 crc kubenswrapper[4779]: I1128 13:51:32.354133 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e666c68ff9e9ac0d69ff4488828194992a4afe96aebe623791b2eb27d056z22_57c6c245-3c5b-41bf-9de3-c5d23d132c71/pull/0.log"
Nov 28 13:51:32 crc kubenswrapper[4779]: I1128 13:51:32.528621 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-ns58c_eaf24224-e1f5-44d8-8151-54be9408b429/kube-rbac-proxy/0.log"
Nov 28 13:51:32 crc kubenswrapper[4779]: I1128 13:51:32.597476 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-ns58c_eaf24224-e1f5-44d8-8151-54be9408b429/manager/0.log"
Nov 28 13:51:32 crc kubenswrapper[4779]: I1128 13:51:32.612009 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-wptr7_b3e0c6a3-33d8-4c1e-8b44-156de87d5621/kube-rbac-proxy/0.log"
Nov 28 13:51:32 crc kubenswrapper[4779]: I1128 13:51:32.799839 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-vd654_40688ccc-932c-411e-8703-4bf0f11ec3bf/kube-rbac-proxy/0.log"
Nov 28 13:51:32 crc kubenswrapper[4779]: I1128 13:51:32.817524 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-wptr7_b3e0c6a3-33d8-4c1e-8b44-156de87d5621/manager/0.log"
Nov 28 13:51:32 crc kubenswrapper[4779]: I1128 13:51:32.831961 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-vd654_40688ccc-932c-411e-8703-4bf0f11ec3bf/manager/0.log"
Nov 28 13:51:33 crc kubenswrapper[4779]: I1128 13:51:33.085269 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-7pv5r_af7046d6-f852-4c62-83e6-ea213812d86c/kube-rbac-proxy/0.log"
Nov 28 13:51:33 crc kubenswrapper[4779]: I1128 13:51:33.189208 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-7pv5r_af7046d6-f852-4c62-83e6-ea213812d86c/manager/0.log"
Nov 28 13:51:33 crc kubenswrapper[4779]: I1128 13:51:33.254081 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-n952x_493d54b8-1e0a-4270-8180-ba1bc746c783/kube-rbac-proxy/0.log"
Nov 28 13:51:33 crc kubenswrapper[4779]: I1128 13:51:33.274897 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-n952x_493d54b8-1e0a-4270-8180-ba1bc746c783/manager/0.log"
Nov 28 13:51:33 crc kubenswrapper[4779]: I1128 13:51:33.403614 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-lfj45_da8e3e32-3cc1-4b1b-91c5-31ac6e660d65/kube-rbac-proxy/0.log"
Nov 28 13:51:33 crc kubenswrapper[4779]: I1128 13:51:33.513324 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-lfj45_da8e3e32-3cc1-4b1b-91c5-31ac6e660d65/manager/0.log"
Nov 28 13:51:33 crc kubenswrapper[4779]: I1128 13:51:33.554025 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-9xxwc_75996749-aa6c-4a8e-ba7f-412209db3939/kube-rbac-proxy/0.log"
Nov 28 13:51:33 crc kubenswrapper[4779]: I1128 13:51:33.600075 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-9xxwc_75996749-aa6c-4a8e-ba7f-412209db3939/manager/0.log"
Nov 28 13:51:33 crc kubenswrapper[4779]: I1128 13:51:33.676287 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-xqxsn_b96763b6-e6a4-4429-8fe4-6b23620824c1/kube-rbac-proxy/0.log"
Nov 28 13:51:33 crc kubenswrapper[4779]: I1128 13:51:33.724975 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-xqxsn_b96763b6-e6a4-4429-8fe4-6b23620824c1/manager/0.log"
Nov 28 13:51:33 crc kubenswrapper[4779]: I1128 13:51:33.874465 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-cnfmd_911b9690-ddec-439e-9ef5-a7d80562f51c/kube-rbac-proxy/0.log"
Nov 28 13:51:33 crc kubenswrapper[4779]: I1128 13:51:33.984440 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-cnfmd_911b9690-ddec-439e-9ef5-a7d80562f51c/manager/0.log"
Nov 28 13:51:34 crc kubenswrapper[4779]: I1128 13:51:34.109201 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-zzflc_3b4accd2-e9c1-4e51-a559-c5cf108f5af1/kube-rbac-proxy/0.log"
Nov 28 13:51:34 crc kubenswrapper[4779]: I1128 13:51:34.197319 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-zzflc_3b4accd2-e9c1-4e51-a559-c5cf108f5af1/manager/0.log"
Nov 28 13:51:34 crc kubenswrapper[4779]: I1128 13:51:34.234402 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-kvnt5_623cd065-a088-41d4-9b98-8be8d60c0f20/kube-rbac-proxy/0.log"
Nov 28 13:51:34 crc kubenswrapper[4779]: I1128 13:51:34.315607 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-kvnt5_623cd065-a088-41d4-9b98-8be8d60c0f20/manager/0.log"
Nov 28 13:51:34 crc kubenswrapper[4779]: I1128 13:51:34.436326 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh_66bfbaf1-3247-47c1-aa58-19cf5875882e/kube-rbac-proxy/0.log"
Nov 28 13:51:34 crc kubenswrapper[4779]: I1128 13:51:34.475654 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bsdkvh_66bfbaf1-3247-47c1-aa58-19cf5875882e/manager/0.log"
Nov 28 13:51:34 crc kubenswrapper[4779]: I1128 13:51:34.965396 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7bb768d89f-48p4r_459f9c74-7dc8-401d-8df4-2c1b947f87df/operator/0.log"
Nov 28 13:51:34 crc kubenswrapper[4779]: I1128 13:51:34.972034 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qvbp8_527c77d8-6692-434a-88b6-4d5e3dc93337/registry-server/0.log"
Nov 28 13:51:35 crc kubenswrapper[4779]: I1128 13:51:35.213600 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-v49kv_bb4ac6b3-6655-4e29-8cf7-bdae98df3386/kube-rbac-proxy/0.log"
Nov 28 13:51:35 crc kubenswrapper[4779]: I1128 13:51:35.271605 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-v49kv_bb4ac6b3-6655-4e29-8cf7-bdae98df3386/manager/0.log"
Nov 28 13:51:35 crc kubenswrapper[4779]: I1128 13:51:35.430430 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-lnf86_b1c19869-b98a-40c8-a312-8c49d69bdf0f/kube-rbac-proxy/0.log"
Nov 28 13:51:35 crc kubenswrapper[4779]: I1128 13:51:35.478815 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-lnf86_b1c19869-b98a-40c8-a312-8c49d69bdf0f/manager/0.log"
Nov 28 13:51:35 crc kubenswrapper[4779]: I1128 13:51:35.620984 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-495dt_1c62c5f4-5757-46d4-92e5-7fdb2b21c88e/operator/0.log"
Nov 28 13:51:35 crc kubenswrapper[4779]: I1128 13:51:35.678350 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-c6wb2_f3d69218-2422-473c-ae41-bd2a2b902355/kube-rbac-proxy/0.log"
Nov 28 13:51:35 crc kubenswrapper[4779]: I1128 13:51:35.766260 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-c6wb2_f3d69218-2422-473c-ae41-bd2a2b902355/manager/0.log"
Nov 28 13:51:35 crc kubenswrapper[4779]: I1128 13:51:35.897895 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7574d9569-x822f_f1d9753d-b49d-4e32-b312-137314283984/kube-rbac-proxy/0.log"
Nov 28 13:51:36 crc kubenswrapper[4779]: I1128 13:51:36.057883 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d967756df-nvprs_31627cc1-b543-4da9-8fe1-ac12e7f09531/manager/0.log"
Nov 28 13:51:36 crc kubenswrapper[4779]: I1128 13:51:36.127644 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-h4czz_39fdca45-fa34-4d90-93a9-1123dff79930/manager/0.log"
Nov 28 13:51:36 crc kubenswrapper[4779]: I1128 13:51:36.137283 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-h4czz_39fdca45-fa34-4d90-93a9-1123dff79930/kube-rbac-proxy/0.log"
Nov 28 13:51:36 crc kubenswrapper[4779]: I1128 13:51:36.146638 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7574d9569-x822f_f1d9753d-b49d-4e32-b312-137314283984/manager/0.log"
Nov 28 13:51:36 crc kubenswrapper[4779]: I1128 13:51:36.243075 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-hjhz4_1799095f-becf-4b8e-bb0b-28c04a819e59/kube-rbac-proxy/0.log"
Nov 28 13:51:36 crc kubenswrapper[4779]: I1128 13:51:36.336987 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-hjhz4_1799095f-becf-4b8e-bb0b-28c04a819e59/manager/0.log"
Nov 28 13:51:41 crc kubenswrapper[4779]: I1128 13:51:41.727828 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"
Nov 28 13:51:41 crc kubenswrapper[4779]: E1128 13:51:41.728742 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:51:54 crc kubenswrapper[4779]: I1128 13:51:54.726470 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"
Nov 28 13:51:54 crc kubenswrapper[4779]: E1128 13:51:54.727370 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:51:56 crc kubenswrapper[4779]: I1128 13:51:56.693156 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-njrwv_1475f2e1-1c5b-470d-b0aa-0645ad327bb5/control-plane-machine-set-operator/0.log"
Nov 28 13:51:56 crc kubenswrapper[4779]: I1128 13:51:56.927856 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bz4fl_c3eebda0-cd9c-448c-8e0c-c25aea48fd54/machine-api-operator/0.log"
Nov 28 13:51:56 crc kubenswrapper[4779]: I1128 13:51:56.929060 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bz4fl_c3eebda0-cd9c-448c-8e0c-c25aea48fd54/kube-rbac-proxy/0.log"
Nov 28 13:52:06 crc kubenswrapper[4779]: I1128 13:52:06.727257 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"
Nov 28 13:52:06 crc kubenswrapper[4779]: E1128 13:52:06.728150 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:52:10 crc kubenswrapper[4779]: I1128 13:52:10.311297 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-5qqff_ec2c397e-6b4d-4ffc-9ffa-4f437657da02/cert-manager-controller/0.log"
Nov 28 13:52:10 crc kubenswrapper[4779]: I1128 13:52:10.391587 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-fx6q6_17acea2c-1197-4905-bb74-3f4137eb521d/cert-manager-cainjector/0.log"
Nov 28 13:52:10 crc kubenswrapper[4779]: I1128 13:52:10.444759 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-bvk27_8445721b-8f86-4161-adc3-2ddf58f3aa94/cert-manager-webhook/0.log"
Nov 28 13:52:18 crc kubenswrapper[4779]: I1128 13:52:18.726014 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"
Nov 28 13:52:18 crc kubenswrapper[4779]: E1128 13:52:18.727788 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:52:22 crc kubenswrapper[4779]: I1128 13:52:22.722922 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-ss4d2_a8a297b2-fc61-4bcf-9872-106b5776cb43/nmstate-console-plugin/0.log"
Nov 28 13:52:22 crc kubenswrapper[4779]: I1128 13:52:22.908538 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-h6q7b_0c9a8cc1-da76-4824-8303-fe9e18c76af3/kube-rbac-proxy/0.log"
Nov 28 13:52:22 crc kubenswrapper[4779]: I1128 13:52:22.923362 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-h6q7b_0c9a8cc1-da76-4824-8303-fe9e18c76af3/nmstate-metrics/0.log"
Nov 28 13:52:22 crc kubenswrapper[4779]: I1128 13:52:22.938360 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mqs42_70ee469b-f21f-4b94-9f6a-1b79db90e4fd/nmstate-handler/0.log"
Nov 28 13:52:23 crc kubenswrapper[4779]: I1128 13:52:23.097870 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-27lnx_82cdcdcc-f4b1-4f17-b8be-81e5525a2438/nmstate-operator/0.log"
Nov 28 13:52:23 crc kubenswrapper[4779]: I1128 13:52:23.157379 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-zrh7w_2ea9d3e0-ee7b-48bc-a358-689318fa4dae/nmstate-webhook/0.log"
Nov 28 13:52:31 crc kubenswrapper[4779]: I1128 13:52:31.726909 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"
Nov 28 13:52:31 crc kubenswrapper[4779]: E1128 13:52:31.728253 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:52:37 crc kubenswrapper[4779]: I1128 13:52:37.957886 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-89xvz_7fe4463e-8739-494e-8171-7bfc925826a9/kube-rbac-proxy/0.log"
Nov 28 13:52:38 crc kubenswrapper[4779]: I1128 13:52:38.047750 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-89xvz_7fe4463e-8739-494e-8171-7bfc925826a9/controller/0.log"
Nov 28 13:52:38 crc kubenswrapper[4779]: I1128 13:52:38.146243 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-frr-files/0.log"
Nov 28 13:52:38 crc kubenswrapper[4779]: I1128 13:52:38.351783 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-metrics/0.log"
Nov 28 13:52:38 crc kubenswrapper[4779]: I1128 13:52:38.358741 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-reloader/0.log"
Nov 28 13:52:38 crc kubenswrapper[4779]: I1128 13:52:38.360631 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-reloader/0.log"
Nov 28 13:52:38 crc kubenswrapper[4779]: I1128 13:52:38.362496 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-frr-files/0.log"
Nov 28 13:52:38 crc kubenswrapper[4779]: I1128 13:52:38.500419 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-frr-files/0.log"
Nov 28 13:52:39 crc kubenswrapper[4779]: I1128 13:52:39.249949 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-metrics/0.log"
Nov 28 13:52:39 crc kubenswrapper[4779]: I1128 13:52:39.332681 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-metrics/0.log"
Nov 28 13:52:39 crc kubenswrapper[4779]: I1128 13:52:39.333937 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-reloader/0.log"
Nov 28 13:52:39 crc kubenswrapper[4779]: I1128 13:52:39.497169 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-reloader/0.log"
Nov 28 13:52:39 crc kubenswrapper[4779]: I1128 13:52:39.534106 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/controller/0.log"
Nov 28 13:52:39 crc kubenswrapper[4779]: I1128 13:52:39.541082 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-frr-files/0.log"
Nov 28 13:52:39 crc kubenswrapper[4779]: I1128 13:52:39.544273 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/cp-metrics/0.log"
Nov 28 13:52:39 crc kubenswrapper[4779]: I1128 13:52:39.708340 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/frr-metrics/0.log"
Nov 28 13:52:39 crc kubenswrapper[4779]: I1128 13:52:39.714775 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/kube-rbac-proxy/0.log"
Nov 28 13:52:39 crc kubenswrapper[4779]: I1128 13:52:39.770118 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/kube-rbac-proxy-frr/0.log"
Nov 28 13:52:39 crc kubenswrapper[4779]: I1128 13:52:39.941039 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/reloader/0.log"
Nov 28 13:52:40 crc kubenswrapper[4779]: I1128 13:52:40.052128 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-nxg68_ea534549-07a6-43e1-98e7-906ee50e4146/frr-k8s-webhook-server/0.log"
Nov 28 13:52:40 crc kubenswrapper[4779]: I1128 13:52:40.237245 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d5c964c78-9tlcl_93890301-ca3f-4009-a55d-960edac754a9/manager/0.log"
Nov 28 13:52:40 crc kubenswrapper[4779]: I1128 13:52:40.401054 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7c8544dcdc-ggmwl_60e79db0-fa26-46e7-80d8-55720f1372a2/webhook-server/0.log"
Nov 28 13:52:40 crc kubenswrapper[4779]: I1128 13:52:40.496118 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-flq64_7bd19fff-499e-443a-b571-8af43ae08b4e/kube-rbac-proxy/0.log"
Nov 28 13:52:41 crc kubenswrapper[4779]: I1128 13:52:41.175859 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-flq64_7bd19fff-499e-443a-b571-8af43ae08b4e/speaker/0.log"
Nov 28 13:52:41 crc kubenswrapper[4779]: I1128 13:52:41.395479 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w5vz4_e5db87da-4229-4c2f-abbd-bb5aff35de97/frr/0.log"
Nov 28 13:52:46 crc kubenswrapper[4779]: I1128 13:52:46.726834 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"
Nov 28 13:52:47 crc kubenswrapper[4779]: I1128 13:52:47.299368 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"e5e95ee5d438035986b77d6983fd6e0403152a1d57c6cbb8297ef9ebb38710a9"}
Nov 28 13:52:53 crc kubenswrapper[4779]: I1128 13:52:53.482303 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/util/0.log"
Nov 28 13:52:53 crc kubenswrapper[4779]: I1128 13:52:53.782043 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/pull/0.log"
Nov 28 13:52:53 crc kubenswrapper[4779]: I1128 13:52:53.795352 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/pull/0.log"
Nov 28 13:52:53 crc kubenswrapper[4779]: I1128 13:52:53.805289 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/util/0.log"
Nov 28 13:52:54 crc kubenswrapper[4779]: I1128 13:52:54.008794 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/util/0.log"
Nov 28 13:52:54 crc kubenswrapper[4779]: I1128 13:52:54.023016 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/extract/0.log"
Nov 28 13:52:54 crc kubenswrapper[4779]: I1128 13:52:54.023282 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fg5659_f88ccd92-f82a-4b6a-9502-f458938ab085/pull/0.log"
Nov 28 13:52:54 crc kubenswrapper[4779]: I1128 13:52:54.363748 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/util/0.log"
Nov 28 13:52:54 crc kubenswrapper[4779]: I1128 13:52:54.567604 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/pull/0.log"
Nov 28 13:52:54 crc kubenswrapper[4779]: I1128 13:52:54.577826 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/util/0.log"
Nov 28 13:52:54 crc kubenswrapper[4779]: I1128 13:52:54.605715 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/pull/0.log"
Nov 28 13:52:54 crc kubenswrapper[4779]: I1128 13:52:54.791189 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/pull/0.log"
Nov 28 13:52:54 crc kubenswrapper[4779]: I1128 13:52:54.792754 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/util/0.log"
Nov 28 13:52:54 crc kubenswrapper[4779]: I1128 13:52:54.829568 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921046cv4_e307524d-7be7-4841-ac8a-dea95d4c976e/extract/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.018345 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/util/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.152836 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/util/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.167466 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/pull/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.174038 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/pull/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.356939 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/pull/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.362002 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/extract/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.383227 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83zqqz8_f70b1dfe-4b12-40c8-8052-da91227479b0/util/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.542447 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/extract-utilities/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.729470 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/extract-content/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.735689 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/extract-utilities/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.745770 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/extract-content/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.908989 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/extract-content/0.log"
Nov 28 13:52:55 crc kubenswrapper[4779]: I1128 13:52:55.919512 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/extract-utilities/0.log"
Nov 28 13:52:56 crc kubenswrapper[4779]: I1128 13:52:56.169752 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/extract-utilities/0.log"
Nov 28 13:52:56 crc kubenswrapper[4779]: I1128 13:52:56.372979 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/extract-content/0.log"
Nov 28 13:52:56 crc kubenswrapper[4779]: I1128 13:52:56.434287 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/extract-content/0.log"
Nov 28 13:52:56 crc kubenswrapper[4779]: I1128 13:52:56.449068 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/extract-utilities/0.log"
Nov 28 13:52:56 crc kubenswrapper[4779]: I1128 13:52:56.450584 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6qmdc_5b79674b-d129-4bf4-91f2-77b42f1d51ea/registry-server/0.log"
Nov 28 13:52:56 crc kubenswrapper[4779]: I1128 13:52:56.684930 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/extract-utilities/0.log"
Nov 28 13:52:56 crc kubenswrapper[4779]: I1128 13:52:56.768881 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/extract-content/0.log"
Nov 28 13:52:56 crc kubenswrapper[4779]: I1128 13:52:56.914980 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r6z5b_6d803c44-5049-4974-ad24-8bdf8082456f/marketplace-operator/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.096577 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/extract-utilities/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.276394 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/extract-utilities/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.292436 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/extract-content/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.323614 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdz82_218924d0-58ac-460f-a4f6-f00925ee6a97/registry-server/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.330286 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/extract-content/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.467348 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/extract-utilities/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.481801 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/extract-content/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.511266 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/extract-utilities/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.677001 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jngtm_aeb5fca6-5157-4e18-8223-59f88908f1c8/registry-server/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.697567 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/extract-utilities/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.714491 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/extract-content/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.739386 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/extract-content/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.897369 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/extract-utilities/0.log"
Nov 28 13:52:57 crc kubenswrapper[4779]: I1128 13:52:57.918268 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/extract-content/0.log"
Nov 28 13:52:58 crc kubenswrapper[4779]: I1128 13:52:58.380203 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hppfn_b5d5dfb9-ebff-4d12-af9a-53220c054a90/registry-server/0.log"
Nov 28 13:53:13 crc kubenswrapper[4779]: I1128 13:53:13.015826 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-l5jtg_06f1d580-00d9-4699-8e8d-8087523ef59a/prometheus-operator/0.log"
Nov 28 13:53:13 crc kubenswrapper[4779]: I1128 13:53:13.209566 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-d986bbfbc-cwqv4_4fc94f4f-278c-4c4f-a547-2779183ca661/prometheus-operator-admission-webhook/0.log"
Nov 28 13:53:13 crc kubenswrapper[4779]: I1128 13:53:13.527005 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-d986bbfbc-z4cw2_9aef4803-506a-4ca3-9bdd-2ef8865a975c/prometheus-operator-admission-webhook/0.log"
Nov 28 13:53:13 crc kubenswrapper[4779]: I1128 13:53:13.644571 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-z4wlc_179dd1bb-6c8d-443a-a408-40273ae8f6f6/operator/0.log"
Nov 28 13:53:13 crc kubenswrapper[4779]: I1128 13:53:13.691872 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-njrck_cfb01668-ce93-42c0-8c77-1aaac40d5160/perses-operator/0.log"
Nov 28 13:53:39 crc kubenswrapper[4779]: I1128 13:53:39.777589 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wh67r"]
Nov 28 13:53:39 crc kubenswrapper[4779]: E1128 13:53:39.779731 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1dd33d-3d1f-492e-ae52-e3b34e15f562" containerName="container-00"
Nov 28 13:53:39 crc kubenswrapper[4779]: I1128 13:53:39.779748 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1dd33d-3d1f-492e-ae52-e3b34e15f562" containerName="container-00"
Nov 28 13:53:39 crc kubenswrapper[4779]: I1128 13:53:39.779962 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a1dd33d-3d1f-492e-ae52-e3b34e15f562" containerName="container-00"
Nov 28 13:53:39 crc kubenswrapper[4779]: I1128 13:53:39.786008 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:39 crc kubenswrapper[4779]: I1128 13:53:39.790316 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wh67r"]
Nov 28 13:53:39 crc kubenswrapper[4779]: I1128 13:53:39.921147 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-utilities\") pod \"community-operators-wh67r\" (UID: \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\") " pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:39 crc kubenswrapper[4779]: I1128 13:53:39.921249 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-catalog-content\") pod \"community-operators-wh67r\" (UID: \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\") " pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:39 crc kubenswrapper[4779]: I1128 13:53:39.921584 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjb9r\" (UniqueName: \"kubernetes.io/projected/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-kube-api-access-wjb9r\") pod \"community-operators-wh67r\" (UID: \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\") " pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:40 crc kubenswrapper[4779]: I1128 13:53:40.023298 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjb9r\" (UniqueName: \"kubernetes.io/projected/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-kube-api-access-wjb9r\") pod \"community-operators-wh67r\" (UID: \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\") " pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:40 crc kubenswrapper[4779]: I1128 13:53:40.023760 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-utilities\") pod \"community-operators-wh67r\" (UID: \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\") " pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:40 crc kubenswrapper[4779]: I1128 13:53:40.024292 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-utilities\") pod \"community-operators-wh67r\" (UID: \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\") " pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:40 crc kubenswrapper[4779]: I1128 13:53:40.025688 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-catalog-content\") pod \"community-operators-wh67r\" (UID: \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\") " pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:40 crc kubenswrapper[4779]: I1128 13:53:40.025362 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-catalog-content\") pod \"community-operators-wh67r\" (UID: \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\") " pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:40 crc kubenswrapper[4779]: I1128 13:53:40.042312 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjb9r\" (UniqueName: \"kubernetes.io/projected/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-kube-api-access-wjb9r\") pod \"community-operators-wh67r\" (UID: \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\") " pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:40 crc kubenswrapper[4779]: I1128 13:53:40.111146 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:40 crc kubenswrapper[4779]: I1128 13:53:40.731443 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wh67r"]
Nov 28 13:53:40 crc kubenswrapper[4779]: I1128 13:53:40.819132 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wh67r" event={"ID":"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf","Type":"ContainerStarted","Data":"ebe28673d967b26ac647c4dedb91f8685ba16ab62b9be36f20c45c57980d63da"}
Nov 28 13:53:41 crc kubenswrapper[4779]: I1128 13:53:41.832962 4779 generic.go:334] "Generic (PLEG): container finished" podID="83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" containerID="baa51f2d5c1ca8535e5e0f92fbef9b7f4b9e232f5c740815a23ae315472baf05" exitCode=0
Nov 28 13:53:41 crc kubenswrapper[4779]: I1128 13:53:41.833490 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wh67r" event={"ID":"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf","Type":"ContainerDied","Data":"baa51f2d5c1ca8535e5e0f92fbef9b7f4b9e232f5c740815a23ae315472baf05"}
Nov 28 13:53:41 crc kubenswrapper[4779]: I1128 13:53:41.836381 4779 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 13:53:43 crc kubenswrapper[4779]: I1128 13:53:43.867722 4779 generic.go:334] "Generic (PLEG): container finished" podID="83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" containerID="d547474dae4bd93ff4af111d2041328ba27b7eb2a285bccd2a6a2f70381bcf40" exitCode=0
Nov 28 13:53:43 crc kubenswrapper[4779]: I1128 13:53:43.867801 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wh67r" event={"ID":"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf","Type":"ContainerDied","Data":"d547474dae4bd93ff4af111d2041328ba27b7eb2a285bccd2a6a2f70381bcf40"}
Nov 28 13:53:44 crc kubenswrapper[4779]: I1128 13:53:44.878676 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wh67r" event={"ID":"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf","Type":"ContainerStarted","Data":"aefec31ef546887a6a6a195f80190c7a8db6c81a45e67f81434edec1a9e7b417"}
Nov 28 13:53:45 crc kubenswrapper[4779]: I1128 13:53:45.918896 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wh67r" podStartSLOduration=4.399826593 podStartE2EDuration="6.918874945s" podCreationTimestamp="2025-11-28 13:53:39 +0000 UTC" firstStartedPulling="2025-11-28 13:53:41.836126037 +0000 UTC m=+4682.401801401" lastFinishedPulling="2025-11-28 13:53:44.355174399 +0000 UTC m=+4684.920849753" observedRunningTime="2025-11-28 13:53:45.905495508 +0000 UTC m=+4686.471170862" watchObservedRunningTime="2025-11-28 13:53:45.918874945 +0000 UTC m=+4686.484550299"
Nov 28 13:53:50 crc kubenswrapper[4779]: I1128 13:53:50.111932 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:50 crc kubenswrapper[4779]: I1128 13:53:50.112791 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:50 crc kubenswrapper[4779]: I1128 13:53:50.346780 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:51 crc kubenswrapper[4779]: I1128 13:53:51.000144 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:51 crc kubenswrapper[4779]: I1128 13:53:51.051454 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wh67r"]
Nov 28 13:53:52 crc kubenswrapper[4779]: I1128 13:53:52.956451 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wh67r" podUID="83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" containerName="registry-server" containerID="cri-o://aefec31ef546887a6a6a195f80190c7a8db6c81a45e67f81434edec1a9e7b417" gracePeriod=2
Nov 28 13:53:53 crc kubenswrapper[4779]: I1128 13:53:53.967915 4779 generic.go:334] "Generic (PLEG): container finished" podID="83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" containerID="aefec31ef546887a6a6a195f80190c7a8db6c81a45e67f81434edec1a9e7b417" exitCode=0
Nov 28 13:53:53 crc kubenswrapper[4779]: I1128 13:53:53.968277 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wh67r" event={"ID":"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf","Type":"ContainerDied","Data":"aefec31ef546887a6a6a195f80190c7a8db6c81a45e67f81434edec1a9e7b417"}
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.515388 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.640982 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjb9r\" (UniqueName: \"kubernetes.io/projected/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-kube-api-access-wjb9r\") pod \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\" (UID: \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\") "
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.641041 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-utilities\") pod \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\" (UID: \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\") "
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.641245 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-catalog-content\") pod \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\" (UID: \"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf\") "
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.643237 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-utilities" (OuterVolumeSpecName: "utilities") pod "83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" (UID: "83e22d6f-87d0-441d-b4b3-ffb34ddd8caf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.651401 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-kube-api-access-wjb9r" (OuterVolumeSpecName: "kube-api-access-wjb9r") pod "83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" (UID: "83e22d6f-87d0-441d-b4b3-ffb34ddd8caf"). InnerVolumeSpecName "kube-api-access-wjb9r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.700583 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" (UID: "83e22d6f-87d0-441d-b4b3-ffb34ddd8caf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.743824 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjb9r\" (UniqueName: \"kubernetes.io/projected/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-kube-api-access-wjb9r\") on node \"crc\" DevicePath \"\""
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.743862 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.743872 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.978261 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wh67r" event={"ID":"83e22d6f-87d0-441d-b4b3-ffb34ddd8caf","Type":"ContainerDied","Data":"ebe28673d967b26ac647c4dedb91f8685ba16ab62b9be36f20c45c57980d63da"}
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.978312 4779 scope.go:117] "RemoveContainer" containerID="aefec31ef546887a6a6a195f80190c7a8db6c81a45e67f81434edec1a9e7b417"
Nov 28 13:53:54 crc kubenswrapper[4779]: I1128 13:53:54.978438 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wh67r"
Nov 28 13:53:55 crc kubenswrapper[4779]: I1128 13:53:55.003607 4779 scope.go:117] "RemoveContainer" containerID="d547474dae4bd93ff4af111d2041328ba27b7eb2a285bccd2a6a2f70381bcf40"
Nov 28 13:53:55 crc kubenswrapper[4779]: I1128 13:53:55.027129 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wh67r"]
Nov 28 13:53:55 crc kubenswrapper[4779]: I1128 13:53:55.031176 4779 scope.go:117] "RemoveContainer" containerID="baa51f2d5c1ca8535e5e0f92fbef9b7f4b9e232f5c740815a23ae315472baf05"
Nov 28 13:53:55 crc kubenswrapper[4779]: I1128 13:53:55.037708 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wh67r"]
Nov 28 13:53:55 crc kubenswrapper[4779]: I1128 13:53:55.776773 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" path="/var/lib/kubelet/pods/83e22d6f-87d0-441d-b4b3-ffb34ddd8caf/volumes"
Nov 28 13:54:46 crc kubenswrapper[4779]: I1128 13:54:46.284560 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:54:46 crc kubenswrapper[4779]: I1128 13:54:46.285299 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:54:46 crc kubenswrapper[4779]: I1128 13:54:46.548630 4779 generic.go:334] "Generic (PLEG): container finished" podID="d4349ff9-0075-4c92-b53f-320ce678210e" containerID="5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84" exitCode=0
Nov 28 13:54:46 crc kubenswrapper[4779]: I1128 13:54:46.548676 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ksl2/must-gather-p8g46" event={"ID":"d4349ff9-0075-4c92-b53f-320ce678210e","Type":"ContainerDied","Data":"5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84"}
Nov 28 13:54:46 crc kubenswrapper[4779]: I1128 13:54:46.549381 4779 scope.go:117] "RemoveContainer" containerID="5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84"
Nov 28 13:54:47 crc kubenswrapper[4779]: I1128 13:54:47.494598 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4ksl2_must-gather-p8g46_d4349ff9-0075-4c92-b53f-320ce678210e/gather/0.log"
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.128316 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p64dx"]
Nov 28 13:54:48 crc kubenswrapper[4779]: E1128 13:54:48.129081 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" containerName="extract-content"
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.129155 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" containerName="extract-content"
Nov 28 13:54:48 crc kubenswrapper[4779]: E1128 13:54:48.129180 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" containerName="registry-server"
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.129197 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" containerName="registry-server"
Nov 28 13:54:48 crc kubenswrapper[4779]: E1128 13:54:48.129225 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" containerName="extract-utilities"
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.129242 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" containerName="extract-utilities"
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.129690 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e22d6f-87d0-441d-b4b3-ffb34ddd8caf" containerName="registry-server"
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.132630 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p64dx"
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.144239 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p64dx"]
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.153106 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgfbn\" (UniqueName: \"kubernetes.io/projected/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-kube-api-access-wgfbn\") pod \"redhat-marketplace-p64dx\" (UID: \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\") " pod="openshift-marketplace/redhat-marketplace-p64dx"
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.153220 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-utilities\") pod \"redhat-marketplace-p64dx\" (UID: \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\") " pod="openshift-marketplace/redhat-marketplace-p64dx"
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.153614 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-catalog-content\") pod \"redhat-marketplace-p64dx\" (UID: \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\") " pod="openshift-marketplace/redhat-marketplace-p64dx"
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.255126 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-utilities\") pod \"redhat-marketplace-p64dx\" (UID: \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\") " pod="openshift-marketplace/redhat-marketplace-p64dx"
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.255233 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-catalog-content\") pod \"redhat-marketplace-p64dx\" (UID: \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\") " pod="openshift-marketplace/redhat-marketplace-p64dx"
Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.255261 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgfbn\" (UniqueName: \"kubernetes.io/projected/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-kube-api-access-wgfbn\") pod \"redhat-marketplace-p64dx\" (UID: \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\") "
pod="openshift-marketplace/redhat-marketplace-p64dx" Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.255743 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-utilities\") pod \"redhat-marketplace-p64dx\" (UID: \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\") " pod="openshift-marketplace/redhat-marketplace-p64dx" Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.255773 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-catalog-content\") pod \"redhat-marketplace-p64dx\" (UID: \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\") " pod="openshift-marketplace/redhat-marketplace-p64dx" Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.279454 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgfbn\" (UniqueName: \"kubernetes.io/projected/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-kube-api-access-wgfbn\") pod \"redhat-marketplace-p64dx\" (UID: \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\") " pod="openshift-marketplace/redhat-marketplace-p64dx" Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.459729 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p64dx" Nov 28 13:54:48 crc kubenswrapper[4779]: I1128 13:54:48.976137 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p64dx"] Nov 28 13:54:49 crc kubenswrapper[4779]: I1128 13:54:49.589866 4779 generic.go:334] "Generic (PLEG): container finished" podID="8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" containerID="ab00562adc2e39f3239f12cd8404e8148fc234e356dee8e08fc0e1219c96249e" exitCode=0 Nov 28 13:54:49 crc kubenswrapper[4779]: I1128 13:54:49.589911 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p64dx" event={"ID":"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337","Type":"ContainerDied","Data":"ab00562adc2e39f3239f12cd8404e8148fc234e356dee8e08fc0e1219c96249e"} Nov 28 13:54:49 crc kubenswrapper[4779]: I1128 13:54:49.589941 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p64dx" event={"ID":"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337","Type":"ContainerStarted","Data":"d321fa2f2296024ebc5756902298a26761235d3f512097a9b6bffcb46dd8c066"} Nov 28 13:54:51 crc kubenswrapper[4779]: I1128 13:54:51.613239 4779 generic.go:334] "Generic (PLEG): container finished" podID="8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" containerID="ab9e67674bbd686308315c3db5a9deba0bdaa1c6ccde2fdf470bdf707e8028d7" exitCode=0 Nov 28 13:54:51 crc kubenswrapper[4779]: I1128 13:54:51.613337 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p64dx" event={"ID":"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337","Type":"ContainerDied","Data":"ab9e67674bbd686308315c3db5a9deba0bdaa1c6ccde2fdf470bdf707e8028d7"} Nov 28 13:54:53 crc kubenswrapper[4779]: I1128 13:54:53.633804 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p64dx" event={"ID":"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337","Type":"ContainerStarted","Data":"48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a"} Nov 28 13:54:53 crc kubenswrapper[4779]: I1128 13:54:53.663972 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-p64dx" podStartSLOduration=2.649385972 podStartE2EDuration="5.6639499s" podCreationTimestamp="2025-11-28 13:54:48 +0000 UTC" firstStartedPulling="2025-11-28 13:54:49.60322274 +0000 UTC m=+4750.168898094" lastFinishedPulling="2025-11-28 13:54:52.617786668 +0000 UTC m=+4753.183462022" observedRunningTime="2025-11-28 13:54:53.651781615 +0000 UTC m=+4754.217456969" watchObservedRunningTime="2025-11-28 13:54:53.6639499 +0000 UTC m=+4754.229625254" Nov 28 13:54:57 crc kubenswrapper[4779]: I1128 13:54:57.709079 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4ksl2/must-gather-p8g46"] Nov 28 13:54:57 crc kubenswrapper[4779]: I1128 13:54:57.709828 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4ksl2/must-gather-p8g46" podUID="d4349ff9-0075-4c92-b53f-320ce678210e" containerName="copy" containerID="cri-o://e1771f7e2112641f070212df553bddbfeb8014ee745bc3449d7df0ee1515fd79" gracePeriod=2 Nov 28 13:54:57 crc kubenswrapper[4779]: I1128 13:54:57.719583 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4ksl2/must-gather-p8g46"] Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.175822 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4ksl2_must-gather-p8g46_d4349ff9-0075-4c92-b53f-320ce678210e/copy/0.log" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.176795 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4ksl2/must-gather-p8g46" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.350850 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d4349ff9-0075-4c92-b53f-320ce678210e-must-gather-output\") pod \"d4349ff9-0075-4c92-b53f-320ce678210e\" (UID: \"d4349ff9-0075-4c92-b53f-320ce678210e\") " Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.351126 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42n6r\" (UniqueName: \"kubernetes.io/projected/d4349ff9-0075-4c92-b53f-320ce678210e-kube-api-access-42n6r\") pod \"d4349ff9-0075-4c92-b53f-320ce678210e\" (UID: \"d4349ff9-0075-4c92-b53f-320ce678210e\") " Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.358003 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4349ff9-0075-4c92-b53f-320ce678210e-kube-api-access-42n6r" (OuterVolumeSpecName: "kube-api-access-42n6r") pod "d4349ff9-0075-4c92-b53f-320ce678210e" (UID: "d4349ff9-0075-4c92-b53f-320ce678210e"). InnerVolumeSpecName "kube-api-access-42n6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.453440 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42n6r\" (UniqueName: \"kubernetes.io/projected/d4349ff9-0075-4c92-b53f-320ce678210e-kube-api-access-42n6r\") on node \"crc\" DevicePath \"\"" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.461088 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p64dx" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.461141 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p64dx" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.541672 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p64dx" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.549223 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4349ff9-0075-4c92-b53f-320ce678210e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d4349ff9-0075-4c92-b53f-320ce678210e" (UID: "d4349ff9-0075-4c92-b53f-320ce678210e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.581552 4779 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d4349ff9-0075-4c92-b53f-320ce678210e-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.680862 4779 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4ksl2_must-gather-p8g46_d4349ff9-0075-4c92-b53f-320ce678210e/copy/0.log" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.681486 4779 scope.go:117] "RemoveContainer" containerID="e1771f7e2112641f070212df553bddbfeb8014ee745bc3449d7df0ee1515fd79" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.681536 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ksl2/must-gather-p8g46" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.681219 4779 generic.go:334] "Generic (PLEG): container finished" podID="d4349ff9-0075-4c92-b53f-320ce678210e" containerID="e1771f7e2112641f070212df553bddbfeb8014ee745bc3449d7df0ee1515fd79" exitCode=143 Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.701047 4779 scope.go:117] "RemoveContainer" containerID="5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.747589 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p64dx" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.828181 4779 scope.go:117] "RemoveContainer" containerID="e1771f7e2112641f070212df553bddbfeb8014ee745bc3449d7df0ee1515fd79" Nov 28 13:54:58 crc kubenswrapper[4779]: E1128 13:54:58.829301 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1771f7e2112641f070212df553bddbfeb8014ee745bc3449d7df0ee1515fd79\": container with ID starting with e1771f7e2112641f070212df553bddbfeb8014ee745bc3449d7df0ee1515fd79 not found: ID does not exist" containerID="e1771f7e2112641f070212df553bddbfeb8014ee745bc3449d7df0ee1515fd79" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.829341 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1771f7e2112641f070212df553bddbfeb8014ee745bc3449d7df0ee1515fd79"} err="failed to get container status \"e1771f7e2112641f070212df553bddbfeb8014ee745bc3449d7df0ee1515fd79\": rpc error: code = NotFound desc = could not find container \"e1771f7e2112641f070212df553bddbfeb8014ee745bc3449d7df0ee1515fd79\": container with ID starting with e1771f7e2112641f070212df553bddbfeb8014ee745bc3449d7df0ee1515fd79 not found: ID does not exist" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.829368 4779 scope.go:117] "RemoveContainer" containerID="5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84" Nov 28 13:54:58 crc kubenswrapper[4779]: E1128 13:54:58.829659 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84\": container with ID starting with 5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84 not found: ID does not exist" containerID="5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.829689 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84"} err="failed to get container status \"5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84\": rpc error: code = NotFound desc = could not find container \"5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84\": container with ID starting with 5511cfad058a3c43be4d89c6008eb2775fd51ed836b46953a38148ab3283fb84 not found: ID does not exist" Nov 28 13:54:58 crc kubenswrapper[4779]: I1128 13:54:58.851783 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p64dx"] Nov 28 13:54:59 crc kubenswrapper[4779]: I1128 13:54:59.740216 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4349ff9-0075-4c92-b53f-320ce678210e" 
path="/var/lib/kubelet/pods/d4349ff9-0075-4c92-b53f-320ce678210e/volumes" Nov 28 13:55:00 crc kubenswrapper[4779]: I1128 13:55:00.704416 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p64dx" podUID="8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" containerName="registry-server" containerID="cri-o://48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a" gracePeriod=2 Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.200407 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p64dx" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.237509 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-catalog-content\") pod \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\" (UID: \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\") " Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.237976 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgfbn\" (UniqueName: \"kubernetes.io/projected/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-kube-api-access-wgfbn\") pod \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\" (UID: \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\") " Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.238007 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-utilities\") pod \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\" (UID: \"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337\") " Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.238748 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-utilities" (OuterVolumeSpecName: "utilities") pod "8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" (UID: "8f66cfa0-8eb1-4f8a-94fa-b12c093fa337"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.250400 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-kube-api-access-wgfbn" (OuterVolumeSpecName: "kube-api-access-wgfbn") pod "8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" (UID: "8f66cfa0-8eb1-4f8a-94fa-b12c093fa337"). InnerVolumeSpecName "kube-api-access-wgfbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.260396 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" (UID: "8f66cfa0-8eb1-4f8a-94fa-b12c093fa337"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.340762 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgfbn\" (UniqueName: \"kubernetes.io/projected/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-kube-api-access-wgfbn\") on node \"crc\" DevicePath \"\"" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.340810 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.340826 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.716728 4779 generic.go:334] "Generic (PLEG): container finished" podID="8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" containerID="48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a" exitCode=0 Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.716802 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p64dx" event={"ID":"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337","Type":"ContainerDied","Data":"48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a"} Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.716824 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p64dx" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.716851 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p64dx" event={"ID":"8f66cfa0-8eb1-4f8a-94fa-b12c093fa337","Type":"ContainerDied","Data":"d321fa2f2296024ebc5756902298a26761235d3f512097a9b6bffcb46dd8c066"} Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.716875 4779 scope.go:117] "RemoveContainer" containerID="48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.777035 4779 scope.go:117] "RemoveContainer" containerID="ab9e67674bbd686308315c3db5a9deba0bdaa1c6ccde2fdf470bdf707e8028d7" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.783616 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p64dx"] Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.798910 4779 scope.go:117] "RemoveContainer" containerID="ab00562adc2e39f3239f12cd8404e8148fc234e356dee8e08fc0e1219c96249e" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.804670 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p64dx"] Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.852817 4779 scope.go:117] "RemoveContainer" containerID="48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a" Nov 28 13:55:01 crc kubenswrapper[4779]: E1128 13:55:01.853295 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a\": container with ID starting with 48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a not found: ID does not exist" containerID="48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.853336 4779 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a"} err="failed to get container status \"48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a\": rpc error: code = NotFound desc = could not find container \"48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a\": container with ID starting with 48cf9a7340f011f0392935289fc188abb72ac91d3e22bf9402b170b639c8012a not found: ID does not exist" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.853376 4779 scope.go:117] "RemoveContainer" containerID="ab9e67674bbd686308315c3db5a9deba0bdaa1c6ccde2fdf470bdf707e8028d7" Nov 28 13:55:01 crc kubenswrapper[4779]: E1128 13:55:01.853822 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab9e67674bbd686308315c3db5a9deba0bdaa1c6ccde2fdf470bdf707e8028d7\": container with ID starting with ab9e67674bbd686308315c3db5a9deba0bdaa1c6ccde2fdf470bdf707e8028d7 not found: ID does not exist" containerID="ab9e67674bbd686308315c3db5a9deba0bdaa1c6ccde2fdf470bdf707e8028d7" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.853852 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab9e67674bbd686308315c3db5a9deba0bdaa1c6ccde2fdf470bdf707e8028d7"} err="failed to get container status \"ab9e67674bbd686308315c3db5a9deba0bdaa1c6ccde2fdf470bdf707e8028d7\": rpc error: code = NotFound desc = could not find container \"ab9e67674bbd686308315c3db5a9deba0bdaa1c6ccde2fdf470bdf707e8028d7\": container with ID starting with ab9e67674bbd686308315c3db5a9deba0bdaa1c6ccde2fdf470bdf707e8028d7 not found: ID does not exist" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.853875 4779 scope.go:117] "RemoveContainer" containerID="ab00562adc2e39f3239f12cd8404e8148fc234e356dee8e08fc0e1219c96249e" Nov 28 13:55:01 crc kubenswrapper[4779]: E1128 13:55:01.854217 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab00562adc2e39f3239f12cd8404e8148fc234e356dee8e08fc0e1219c96249e\": container with ID starting with ab00562adc2e39f3239f12cd8404e8148fc234e356dee8e08fc0e1219c96249e not found: ID does not exist" containerID="ab00562adc2e39f3239f12cd8404e8148fc234e356dee8e08fc0e1219c96249e" Nov 28 13:55:01 crc kubenswrapper[4779]: I1128 13:55:01.854258 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab00562adc2e39f3239f12cd8404e8148fc234e356dee8e08fc0e1219c96249e"} err="failed to get container status \"ab00562adc2e39f3239f12cd8404e8148fc234e356dee8e08fc0e1219c96249e\": rpc error: code = NotFound desc = could not find container \"ab00562adc2e39f3239f12cd8404e8148fc234e356dee8e08fc0e1219c96249e\": container with ID starting with ab00562adc2e39f3239f12cd8404e8148fc234e356dee8e08fc0e1219c96249e not found: ID does not exist" Nov 28 13:55:03 crc kubenswrapper[4779]: I1128 13:55:03.738806 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" path="/var/lib/kubelet/pods/8f66cfa0-8eb1-4f8a-94fa-b12c093fa337/volumes" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.201855 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bhpwp"] Nov 28 13:55:04 crc kubenswrapper[4779]: E1128 13:55:04.202564 4779 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" containerName="registry-server" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.202637 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" containerName="registry-server" Nov 28 13:55:04 crc kubenswrapper[4779]: E1128 13:55:04.202728 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4349ff9-0075-4c92-b53f-320ce678210e" containerName="copy" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.202789 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4349ff9-0075-4c92-b53f-320ce678210e" containerName="copy" Nov 28 13:55:04 crc kubenswrapper[4779]: E1128 13:55:04.202845 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" containerName="extract-utilities" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.202906 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" containerName="extract-utilities" Nov 28 13:55:04 crc kubenswrapper[4779]: E1128 13:55:04.202964 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4349ff9-0075-4c92-b53f-320ce678210e" containerName="gather" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.203014 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4349ff9-0075-4c92-b53f-320ce678210e" containerName="gather" Nov 28 13:55:04 crc kubenswrapper[4779]: E1128 13:55:04.203073 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" containerName="extract-content" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.203178 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" containerName="extract-content" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.203497 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f66cfa0-8eb1-4f8a-94fa-b12c093fa337" containerName="registry-server" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.203580 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4349ff9-0075-4c92-b53f-320ce678210e" containerName="copy" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.203648 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4349ff9-0075-4c92-b53f-320ce678210e" containerName="gather" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.205126 4779 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.213919 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhpwp"] Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.403267 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654e7af0-884b-4dfe-b699-2d48e5abf883-catalog-content\") pod \"certified-operators-bhpwp\" (UID: \"654e7af0-884b-4dfe-b699-2d48e5abf883\") " pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.403592 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654e7af0-884b-4dfe-b699-2d48e5abf883-utilities\") pod \"certified-operators-bhpwp\" (UID: \"654e7af0-884b-4dfe-b699-2d48e5abf883\") " pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.403920 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djc6q\" (UniqueName: \"kubernetes.io/projected/654e7af0-884b-4dfe-b699-2d48e5abf883-kube-api-access-djc6q\") pod \"certified-operators-bhpwp\" (UID: \"654e7af0-884b-4dfe-b699-2d48e5abf883\") " pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.506223 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654e7af0-884b-4dfe-b699-2d48e5abf883-catalog-content\") pod \"certified-operators-bhpwp\" (UID: \"654e7af0-884b-4dfe-b699-2d48e5abf883\") " pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.506368 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654e7af0-884b-4dfe-b699-2d48e5abf883-utilities\") pod \"certified-operators-bhpwp\" (UID: \"654e7af0-884b-4dfe-b699-2d48e5abf883\") " pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.506450 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djc6q\" (UniqueName: \"kubernetes.io/projected/654e7af0-884b-4dfe-b699-2d48e5abf883-kube-api-access-djc6q\") pod \"certified-operators-bhpwp\" (UID: \"654e7af0-884b-4dfe-b699-2d48e5abf883\") " pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.506839 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654e7af0-884b-4dfe-b699-2d48e5abf883-catalog-content\") pod \"certified-operators-bhpwp\" (UID: \"654e7af0-884b-4dfe-b699-2d48e5abf883\") " pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.506940 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654e7af0-884b-4dfe-b699-2d48e5abf883-utilities\") pod \"certified-operators-bhpwp\" (UID: \"654e7af0-884b-4dfe-b699-2d48e5abf883\") " pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.535526 4779 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-djc6q\" (UniqueName: \"kubernetes.io/projected/654e7af0-884b-4dfe-b699-2d48e5abf883-kube-api-access-djc6q\") pod \"certified-operators-bhpwp\" (UID: \"654e7af0-884b-4dfe-b699-2d48e5abf883\") " pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:04 crc kubenswrapper[4779]: I1128 13:55:04.829347 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:05 crc kubenswrapper[4779]: I1128 13:55:05.321504 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhpwp"] Nov 28 13:55:06 crc kubenswrapper[4779]: I1128 13:55:06.771334 4779 generic.go:334] "Generic (PLEG): container finished" podID="654e7af0-884b-4dfe-b699-2d48e5abf883" containerID="9f565dc9e68410736ebb3507e28a8c59cd18506b8688e4bcd5cb733aca30fe6f" exitCode=0 Nov 28 13:55:06 crc kubenswrapper[4779]: I1128 13:55:06.771431 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhpwp" event={"ID":"654e7af0-884b-4dfe-b699-2d48e5abf883","Type":"ContainerDied","Data":"9f565dc9e68410736ebb3507e28a8c59cd18506b8688e4bcd5cb733aca30fe6f"} Nov 28 13:55:06 crc kubenswrapper[4779]: I1128 13:55:06.771655 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhpwp" event={"ID":"654e7af0-884b-4dfe-b699-2d48e5abf883","Type":"ContainerStarted","Data":"a70138b494104546512f28f0d3f284abcc055845cd00c686463d987e67299437"} Nov 28 13:55:07 crc kubenswrapper[4779]: I1128 13:55:07.781904 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhpwp" event={"ID":"654e7af0-884b-4dfe-b699-2d48e5abf883","Type":"ContainerStarted","Data":"f9620f1e2caa97fceedfad4069d0af7f59bd523061246095513a526d8459c474"} Nov 28 13:55:08 crc kubenswrapper[4779]: I1128 13:55:08.793675 4779 generic.go:334] "Generic (PLEG): container finished" podID="654e7af0-884b-4dfe-b699-2d48e5abf883" containerID="f9620f1e2caa97fceedfad4069d0af7f59bd523061246095513a526d8459c474" exitCode=0 Nov 28 13:55:08 crc kubenswrapper[4779]: I1128 13:55:08.793714 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhpwp" event={"ID":"654e7af0-884b-4dfe-b699-2d48e5abf883","Type":"ContainerDied","Data":"f9620f1e2caa97fceedfad4069d0af7f59bd523061246095513a526d8459c474"} Nov 28 13:55:10 crc kubenswrapper[4779]: I1128 13:55:10.820991 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhpwp" event={"ID":"654e7af0-884b-4dfe-b699-2d48e5abf883","Type":"ContainerStarted","Data":"9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072"} Nov 28 13:55:10 crc kubenswrapper[4779]: I1128 13:55:10.858575 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bhpwp" podStartSLOduration=4.083109858 podStartE2EDuration="6.858552259s" podCreationTimestamp="2025-11-28 13:55:04 +0000 UTC" firstStartedPulling="2025-11-28 13:55:06.772937354 +0000 UTC m=+4767.338612708" lastFinishedPulling="2025-11-28 13:55:09.548379755 +0000 UTC m=+4770.114055109" observedRunningTime="2025-11-28 13:55:10.845812529 +0000 UTC m=+4771.411487883" watchObservedRunningTime="2025-11-28 13:55:10.858552259 +0000 UTC m=+4771.424227613" Nov 28 13:55:14 crc kubenswrapper[4779]: I1128 13:55:14.829420 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:14 crc kubenswrapper[4779]: I1128 13:55:14.829894 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:14 crc kubenswrapper[4779]: I1128 13:55:14.897018 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:14 crc kubenswrapper[4779]: I1128 13:55:14.954684 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:15 crc kubenswrapper[4779]: I1128 13:55:15.141033 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhpwp"] Nov 28 13:55:16 crc kubenswrapper[4779]: I1128 13:55:16.285145 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:55:16 crc kubenswrapper[4779]: I1128 13:55:16.285377 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:55:16 crc kubenswrapper[4779]: I1128 13:55:16.882519 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bhpwp" podUID="654e7af0-884b-4dfe-b699-2d48e5abf883" containerName="registry-server" containerID="cri-o://9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072" gracePeriod=2 Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.351901 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.466146 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654e7af0-884b-4dfe-b699-2d48e5abf883-catalog-content\") pod \"654e7af0-884b-4dfe-b699-2d48e5abf883\" (UID: \"654e7af0-884b-4dfe-b699-2d48e5abf883\") " Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.466503 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654e7af0-884b-4dfe-b699-2d48e5abf883-utilities\") pod \"654e7af0-884b-4dfe-b699-2d48e5abf883\" (UID: \"654e7af0-884b-4dfe-b699-2d48e5abf883\") " Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.466567 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djc6q\" (UniqueName: \"kubernetes.io/projected/654e7af0-884b-4dfe-b699-2d48e5abf883-kube-api-access-djc6q\") pod \"654e7af0-884b-4dfe-b699-2d48e5abf883\" (UID: \"654e7af0-884b-4dfe-b699-2d48e5abf883\") " Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.468359 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654e7af0-884b-4dfe-b699-2d48e5abf883-utilities" (OuterVolumeSpecName: "utilities") pod "654e7af0-884b-4dfe-b699-2d48e5abf883" (UID: "654e7af0-884b-4dfe-b699-2d48e5abf883"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.481459 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654e7af0-884b-4dfe-b699-2d48e5abf883-kube-api-access-djc6q" (OuterVolumeSpecName: "kube-api-access-djc6q") pod "654e7af0-884b-4dfe-b699-2d48e5abf883" (UID: "654e7af0-884b-4dfe-b699-2d48e5abf883"). InnerVolumeSpecName "kube-api-access-djc6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.529926 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654e7af0-884b-4dfe-b699-2d48e5abf883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "654e7af0-884b-4dfe-b699-2d48e5abf883" (UID: "654e7af0-884b-4dfe-b699-2d48e5abf883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.568669 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654e7af0-884b-4dfe-b699-2d48e5abf883-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.568705 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654e7af0-884b-4dfe-b699-2d48e5abf883-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.568715 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djc6q\" (UniqueName: \"kubernetes.io/projected/654e7af0-884b-4dfe-b699-2d48e5abf883-kube-api-access-djc6q\") on node \"crc\" DevicePath \"\"" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.894739 4779 generic.go:334] "Generic (PLEG): container finished" podID="654e7af0-884b-4dfe-b699-2d48e5abf883" containerID="9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072" exitCode=0 Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.894793 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhpwp" event={"ID":"654e7af0-884b-4dfe-b699-2d48e5abf883","Type":"ContainerDied","Data":"9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072"} Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.894833 4779 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhpwp" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.894858 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhpwp" event={"ID":"654e7af0-884b-4dfe-b699-2d48e5abf883","Type":"ContainerDied","Data":"a70138b494104546512f28f0d3f284abcc055845cd00c686463d987e67299437"} Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.894885 4779 scope.go:117] "RemoveContainer" containerID="9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.917790 4779 scope.go:117] "RemoveContainer" containerID="f9620f1e2caa97fceedfad4069d0af7f59bd523061246095513a526d8459c474" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.927319 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhpwp"] Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.940123 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bhpwp"] Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.942899 4779 scope.go:117] "RemoveContainer" containerID="9f565dc9e68410736ebb3507e28a8c59cd18506b8688e4bcd5cb733aca30fe6f" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.988980 4779 scope.go:117] "RemoveContainer" containerID="9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072" Nov 28 13:55:17 crc kubenswrapper[4779]: E1128 13:55:17.989704 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072\": container with ID starting with 9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072 not found: ID does not exist" containerID="9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.989779 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072"} err="failed to get container status \"9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072\": rpc error: code = NotFound desc = could not find container \"9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072\": container with ID starting with 9d1941c04e8ec724df1d0f42d04216aea92d0f17aafaa726a7fc32b0497bc072 not found: ID does not exist" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.989816 4779 scope.go:117] "RemoveContainer" containerID="f9620f1e2caa97fceedfad4069d0af7f59bd523061246095513a526d8459c474" Nov 28 13:55:17 crc kubenswrapper[4779]: E1128 13:55:17.990310 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9620f1e2caa97fceedfad4069d0af7f59bd523061246095513a526d8459c474\": container with ID starting with f9620f1e2caa97fceedfad4069d0af7f59bd523061246095513a526d8459c474 not found: ID does not exist" containerID="f9620f1e2caa97fceedfad4069d0af7f59bd523061246095513a526d8459c474" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.990360 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9620f1e2caa97fceedfad4069d0af7f59bd523061246095513a526d8459c474"} err="failed to get container status \"f9620f1e2caa97fceedfad4069d0af7f59bd523061246095513a526d8459c474\": rpc error: code = NotFound desc = could not find 
container \"f9620f1e2caa97fceedfad4069d0af7f59bd523061246095513a526d8459c474\": container with ID starting with f9620f1e2caa97fceedfad4069d0af7f59bd523061246095513a526d8459c474 not found: ID does not exist" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.990385 4779 scope.go:117] "RemoveContainer" containerID="9f565dc9e68410736ebb3507e28a8c59cd18506b8688e4bcd5cb733aca30fe6f" Nov 28 13:55:17 crc kubenswrapper[4779]: E1128 13:55:17.990722 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f565dc9e68410736ebb3507e28a8c59cd18506b8688e4bcd5cb733aca30fe6f\": container with ID starting with 9f565dc9e68410736ebb3507e28a8c59cd18506b8688e4bcd5cb733aca30fe6f not found: ID does not exist" containerID="9f565dc9e68410736ebb3507e28a8c59cd18506b8688e4bcd5cb733aca30fe6f" Nov 28 13:55:17 crc kubenswrapper[4779]: I1128 13:55:17.990767 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f565dc9e68410736ebb3507e28a8c59cd18506b8688e4bcd5cb733aca30fe6f"} err="failed to get container status \"9f565dc9e68410736ebb3507e28a8c59cd18506b8688e4bcd5cb733aca30fe6f\": rpc error: code = NotFound desc = could not find container \"9f565dc9e68410736ebb3507e28a8c59cd18506b8688e4bcd5cb733aca30fe6f\": container with ID starting with 9f565dc9e68410736ebb3507e28a8c59cd18506b8688e4bcd5cb733aca30fe6f not found: ID does not exist" Nov 28 13:55:19 crc kubenswrapper[4779]: I1128 13:55:19.744497 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654e7af0-884b-4dfe-b699-2d48e5abf883" path="/var/lib/kubelet/pods/654e7af0-884b-4dfe-b699-2d48e5abf883/volumes" Nov 28 13:55:46 crc kubenswrapper[4779]: I1128 13:55:46.284460 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 13:55:46 crc kubenswrapper[4779]: I1128 13:55:46.285145 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 13:55:46 crc kubenswrapper[4779]: I1128 13:55:46.285214 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" Nov 28 13:55:46 crc kubenswrapper[4779]: I1128 13:55:46.286366 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5e95ee5d438035986b77d6983fd6e0403152a1d57c6cbb8297ef9ebb38710a9"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 13:55:46 crc kubenswrapper[4779]: I1128 13:55:46.286449 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://e5e95ee5d438035986b77d6983fd6e0403152a1d57c6cbb8297ef9ebb38710a9" gracePeriod=600 Nov 28 13:55:46 crc kubenswrapper[4779]: E1128 13:55:46.496221 4779 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b2a3eb4_4de5_491b_b466_3a35b7d745ec.slice/crio-conmon-e5e95ee5d438035986b77d6983fd6e0403152a1d57c6cbb8297ef9ebb38710a9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b2a3eb4_4de5_491b_b466_3a35b7d745ec.slice/crio-e5e95ee5d438035986b77d6983fd6e0403152a1d57c6cbb8297ef9ebb38710a9.scope\": RecentStats: unable to find data in memory cache]"
Nov 28 13:55:47 crc kubenswrapper[4779]: I1128 13:55:47.247272 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="e5e95ee5d438035986b77d6983fd6e0403152a1d57c6cbb8297ef9ebb38710a9" exitCode=0
Nov 28 13:55:47 crc kubenswrapper[4779]: I1128 13:55:47.247309 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"e5e95ee5d438035986b77d6983fd6e0403152a1d57c6cbb8297ef9ebb38710a9"}
Nov 28 13:55:47 crc kubenswrapper[4779]: I1128 13:55:47.248084 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerStarted","Data":"90d43b19c21c76ebfcdfe319ab08cbedbcc41b2aaa9010869b06c290a9c8d29b"}
Nov 28 13:55:47 crc kubenswrapper[4779]: I1128 13:55:47.248183 4779 scope.go:117] "RemoveContainer" containerID="9f006008e295b62d0150e689d0b029c75904925a7bbc3374e7ffce20c396b60a"
Nov 28 13:57:12 crc kubenswrapper[4779]: I1128 13:57:12.180845 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="13db3856-5125-439c-86a8-4493e5619b44" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Nov 28 13:57:46 crc kubenswrapper[4779]: I1128 13:57:46.285214 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:57:46 crc kubenswrapper[4779]: I1128 13:57:46.286265 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.121490 4779 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tm27c"]
Nov 28 13:57:49 crc kubenswrapper[4779]: E1128 13:57:49.122269 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654e7af0-884b-4dfe-b699-2d48e5abf883" containerName="registry-server"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.122282 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="654e7af0-884b-4dfe-b699-2d48e5abf883" containerName="registry-server"
Nov 28 13:57:49 crc kubenswrapper[4779]: E1128 13:57:49.122303 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654e7af0-884b-4dfe-b699-2d48e5abf883" containerName="extract-content"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.122309 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="654e7af0-884b-4dfe-b699-2d48e5abf883" containerName="extract-content"
Nov 28 13:57:49 crc kubenswrapper[4779]: E1128 13:57:49.122339 4779 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654e7af0-884b-4dfe-b699-2d48e5abf883" containerName="extract-utilities"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.122346 4779 state_mem.go:107] "Deleted CPUSet assignment" podUID="654e7af0-884b-4dfe-b699-2d48e5abf883" containerName="extract-utilities"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.122524 4779 memory_manager.go:354] "RemoveStaleState removing state" podUID="654e7af0-884b-4dfe-b699-2d48e5abf883" containerName="registry-server"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.123957 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.150770 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e71500e-e601-4476-b9a1-7f24291dcc2b-catalog-content\") pod \"redhat-operators-tm27c\" (UID: \"2e71500e-e601-4476-b9a1-7f24291dcc2b\") " pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.151053 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e71500e-e601-4476-b9a1-7f24291dcc2b-utilities\") pod \"redhat-operators-tm27c\" (UID: \"2e71500e-e601-4476-b9a1-7f24291dcc2b\") " pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.151118 4779 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhnwt\" (UniqueName: \"kubernetes.io/projected/2e71500e-e601-4476-b9a1-7f24291dcc2b-kube-api-access-zhnwt\") pod \"redhat-operators-tm27c\" (UID: \"2e71500e-e601-4476-b9a1-7f24291dcc2b\") " pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.156576 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tm27c"]
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.253356 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e71500e-e601-4476-b9a1-7f24291dcc2b-utilities\") pod \"redhat-operators-tm27c\" (UID: \"2e71500e-e601-4476-b9a1-7f24291dcc2b\") " pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.253404 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhnwt\" (UniqueName: \"kubernetes.io/projected/2e71500e-e601-4476-b9a1-7f24291dcc2b-kube-api-access-zhnwt\") pod \"redhat-operators-tm27c\" (UID: \"2e71500e-e601-4476-b9a1-7f24291dcc2b\") " pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.253481 4779 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e71500e-e601-4476-b9a1-7f24291dcc2b-catalog-content\") pod \"redhat-operators-tm27c\" (UID: \"2e71500e-e601-4476-b9a1-7f24291dcc2b\") " pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.253845 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e71500e-e601-4476-b9a1-7f24291dcc2b-utilities\") pod \"redhat-operators-tm27c\" (UID: \"2e71500e-e601-4476-b9a1-7f24291dcc2b\") " pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.253888 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e71500e-e601-4476-b9a1-7f24291dcc2b-catalog-content\") pod \"redhat-operators-tm27c\" (UID: \"2e71500e-e601-4476-b9a1-7f24291dcc2b\") " pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.278547 4779 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhnwt\" (UniqueName: \"kubernetes.io/projected/2e71500e-e601-4476-b9a1-7f24291dcc2b-kube-api-access-zhnwt\") pod \"redhat-operators-tm27c\" (UID: \"2e71500e-e601-4476-b9a1-7f24291dcc2b\") " pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:57:49 crc kubenswrapper[4779]: I1128 13:57:49.464282 4779 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:57:50 crc kubenswrapper[4779]: I1128 13:57:50.012341 4779 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tm27c"]
Nov 28 13:57:50 crc kubenswrapper[4779]: I1128 13:57:50.665598 4779 generic.go:334] "Generic (PLEG): container finished" podID="2e71500e-e601-4476-b9a1-7f24291dcc2b" containerID="a5d8f0171520188a51ec32baafb50fe19264aca96a5de49d873dfe927096b65c" exitCode=0
Nov 28 13:57:50 crc kubenswrapper[4779]: I1128 13:57:50.665635 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm27c" event={"ID":"2e71500e-e601-4476-b9a1-7f24291dcc2b","Type":"ContainerDied","Data":"a5d8f0171520188a51ec32baafb50fe19264aca96a5de49d873dfe927096b65c"}
Nov 28 13:57:50 crc kubenswrapper[4779]: I1128 13:57:50.665659 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm27c" event={"ID":"2e71500e-e601-4476-b9a1-7f24291dcc2b","Type":"ContainerStarted","Data":"3476a586d75688dc816f5aeef196d1d329255f6cc32156db63ba4528941a5d29"}
Nov 28 13:57:51 crc kubenswrapper[4779]: I1128 13:57:51.683009 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm27c" event={"ID":"2e71500e-e601-4476-b9a1-7f24291dcc2b","Type":"ContainerStarted","Data":"85ae106cb3c6b83b77fb97cc57071a09a988231a102af613988694a8b5c27240"}
Nov 28 13:57:53 crc kubenswrapper[4779]: I1128 13:57:53.701333 4779 generic.go:334] "Generic (PLEG): container finished" podID="2e71500e-e601-4476-b9a1-7f24291dcc2b" containerID="85ae106cb3c6b83b77fb97cc57071a09a988231a102af613988694a8b5c27240" exitCode=0
Nov 28 13:57:53 crc kubenswrapper[4779]: I1128 13:57:53.701414 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm27c" event={"ID":"2e71500e-e601-4476-b9a1-7f24291dcc2b","Type":"ContainerDied","Data":"85ae106cb3c6b83b77fb97cc57071a09a988231a102af613988694a8b5c27240"}
Nov 28 13:57:55 crc kubenswrapper[4779]: I1128 13:57:55.724698 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm27c" event={"ID":"2e71500e-e601-4476-b9a1-7f24291dcc2b","Type":"ContainerStarted","Data":"01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f"}
Nov 28 13:57:55 crc kubenswrapper[4779]: I1128 13:57:55.763371 4779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tm27c" podStartSLOduration=3.265145437 podStartE2EDuration="6.763348782s" podCreationTimestamp="2025-11-28 13:57:49 +0000 UTC" firstStartedPulling="2025-11-28 13:57:50.667076874 +0000 UTC m=+4931.232752228" lastFinishedPulling="2025-11-28 13:57:54.165280219 +0000 UTC m=+4934.730955573" observedRunningTime="2025-11-28 13:57:55.750140209 +0000 UTC m=+4936.315815573" watchObservedRunningTime="2025-11-28 13:57:55.763348782 +0000 UTC m=+4936.329024136"
Nov 28 13:57:59 crc kubenswrapper[4779]: I1128 13:57:59.464470 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:57:59 crc kubenswrapper[4779]: I1128 13:57:59.464977 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:58:00 crc kubenswrapper[4779]: I1128 13:58:00.532811 4779 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tm27c" podUID="2e71500e-e601-4476-b9a1-7f24291dcc2b" containerName="registry-server" probeResult="failure" output=<
Nov 28 13:58:00 crc kubenswrapper[4779]: timeout: failed to connect service ":50051" within 1s
Nov 28 13:58:00 crc kubenswrapper[4779]: >
Nov 28 13:58:09 crc kubenswrapper[4779]: I1128 13:58:09.533637 4779 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:58:09 crc kubenswrapper[4779]: I1128 13:58:09.583616 4779 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:58:09 crc kubenswrapper[4779]: I1128 13:58:09.791744 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tm27c"]
Nov 28 13:58:10 crc kubenswrapper[4779]: I1128 13:58:10.914255 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tm27c" podUID="2e71500e-e601-4476-b9a1-7f24291dcc2b" containerName="registry-server" containerID="cri-o://01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f" gracePeriod=2
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.442634 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.556203 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhnwt\" (UniqueName: \"kubernetes.io/projected/2e71500e-e601-4476-b9a1-7f24291dcc2b-kube-api-access-zhnwt\") pod \"2e71500e-e601-4476-b9a1-7f24291dcc2b\" (UID: \"2e71500e-e601-4476-b9a1-7f24291dcc2b\") "
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.558454 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e71500e-e601-4476-b9a1-7f24291dcc2b-catalog-content\") pod \"2e71500e-e601-4476-b9a1-7f24291dcc2b\" (UID: \"2e71500e-e601-4476-b9a1-7f24291dcc2b\") "
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.558693 4779 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e71500e-e601-4476-b9a1-7f24291dcc2b-utilities\") pod \"2e71500e-e601-4476-b9a1-7f24291dcc2b\" (UID: \"2e71500e-e601-4476-b9a1-7f24291dcc2b\") "
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.560746 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e71500e-e601-4476-b9a1-7f24291dcc2b-utilities" (OuterVolumeSpecName: "utilities") pod "2e71500e-e601-4476-b9a1-7f24291dcc2b" (UID: "2e71500e-e601-4476-b9a1-7f24291dcc2b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.562482 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e71500e-e601-4476-b9a1-7f24291dcc2b-kube-api-access-zhnwt" (OuterVolumeSpecName: "kube-api-access-zhnwt") pod "2e71500e-e601-4476-b9a1-7f24291dcc2b" (UID: "2e71500e-e601-4476-b9a1-7f24291dcc2b"). InnerVolumeSpecName "kube-api-access-zhnwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.661963 4779 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhnwt\" (UniqueName: \"kubernetes.io/projected/2e71500e-e601-4476-b9a1-7f24291dcc2b-kube-api-access-zhnwt\") on node \"crc\" DevicePath \"\""
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.662000 4779 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e71500e-e601-4476-b9a1-7f24291dcc2b-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.663224 4779 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e71500e-e601-4476-b9a1-7f24291dcc2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e71500e-e601-4476-b9a1-7f24291dcc2b" (UID: "2e71500e-e601-4476-b9a1-7f24291dcc2b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.764258 4779 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e71500e-e601-4476-b9a1-7f24291dcc2b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.953083 4779 generic.go:334] "Generic (PLEG): container finished" podID="2e71500e-e601-4476-b9a1-7f24291dcc2b" containerID="01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f" exitCode=0
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.953204 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm27c" event={"ID":"2e71500e-e601-4476-b9a1-7f24291dcc2b","Type":"ContainerDied","Data":"01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f"}
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.953236 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm27c" event={"ID":"2e71500e-e601-4476-b9a1-7f24291dcc2b","Type":"ContainerDied","Data":"3476a586d75688dc816f5aeef196d1d329255f6cc32156db63ba4528941a5d29"}
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.953294 4779 scope.go:117] "RemoveContainer" containerID="01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f"
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.954415 4779 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm27c"
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.986549 4779 scope.go:117] "RemoveContainer" containerID="85ae106cb3c6b83b77fb97cc57071a09a988231a102af613988694a8b5c27240"
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.989180 4779 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tm27c"]
Nov 28 13:58:11 crc kubenswrapper[4779]: I1128 13:58:11.998344 4779 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tm27c"]
Nov 28 13:58:12 crc kubenswrapper[4779]: I1128 13:58:12.013275 4779 scope.go:117] "RemoveContainer" containerID="a5d8f0171520188a51ec32baafb50fe19264aca96a5de49d873dfe927096b65c"
Nov 28 13:58:12 crc kubenswrapper[4779]: I1128 13:58:12.057740 4779 scope.go:117] "RemoveContainer" containerID="01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f"
Nov 28 13:58:12 crc kubenswrapper[4779]: E1128 13:58:12.058483 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f\": container with ID starting with 01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f not found: ID does not exist" containerID="01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f"
Nov 28 13:58:12 crc kubenswrapper[4779]: I1128 13:58:12.058554 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f"} err="failed to get container status \"01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f\": rpc error: code = NotFound desc = could not find container \"01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f\": container with ID starting with 01a11d6de3703797c443784c2168446c54b68e5f8d74626f73e92f58f0a35d1f not found: ID does not exist"
Nov 28 13:58:12 crc kubenswrapper[4779]: I1128 13:58:12.058642 4779 scope.go:117] "RemoveContainer" containerID="85ae106cb3c6b83b77fb97cc57071a09a988231a102af613988694a8b5c27240"
Nov 28 13:58:12 crc kubenswrapper[4779]: E1128 13:58:12.058913 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85ae106cb3c6b83b77fb97cc57071a09a988231a102af613988694a8b5c27240\": container with ID starting with 85ae106cb3c6b83b77fb97cc57071a09a988231a102af613988694a8b5c27240 not found: ID does not exist" containerID="85ae106cb3c6b83b77fb97cc57071a09a988231a102af613988694a8b5c27240"
Nov 28 13:58:12 crc kubenswrapper[4779]: I1128 13:58:12.058945 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85ae106cb3c6b83b77fb97cc57071a09a988231a102af613988694a8b5c27240"} err="failed to get container status \"85ae106cb3c6b83b77fb97cc57071a09a988231a102af613988694a8b5c27240\": rpc error: code = NotFound desc = could not find container \"85ae106cb3c6b83b77fb97cc57071a09a988231a102af613988694a8b5c27240\": container with ID starting with 85ae106cb3c6b83b77fb97cc57071a09a988231a102af613988694a8b5c27240 not found: ID does not exist"
Nov 28 13:58:12 crc kubenswrapper[4779]: I1128 13:58:12.058965 4779 scope.go:117] "RemoveContainer" containerID="a5d8f0171520188a51ec32baafb50fe19264aca96a5de49d873dfe927096b65c"
Nov 28 13:58:12 crc kubenswrapper[4779]: E1128 13:58:12.059396 4779 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5d8f0171520188a51ec32baafb50fe19264aca96a5de49d873dfe927096b65c\": container with ID starting with a5d8f0171520188a51ec32baafb50fe19264aca96a5de49d873dfe927096b65c not found: ID does not exist" containerID="a5d8f0171520188a51ec32baafb50fe19264aca96a5de49d873dfe927096b65c"
Nov 28 13:58:12 crc kubenswrapper[4779]: I1128 13:58:12.059545 4779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5d8f0171520188a51ec32baafb50fe19264aca96a5de49d873dfe927096b65c"} err="failed to get container status \"a5d8f0171520188a51ec32baafb50fe19264aca96a5de49d873dfe927096b65c\": rpc error: code = NotFound desc = could not find container \"a5d8f0171520188a51ec32baafb50fe19264aca96a5de49d873dfe927096b65c\": container with ID starting with a5d8f0171520188a51ec32baafb50fe19264aca96a5de49d873dfe927096b65c not found: ID does not exist"
Nov 28 13:58:13 crc kubenswrapper[4779]: I1128 13:58:13.740851 4779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e71500e-e601-4476-b9a1-7f24291dcc2b" path="/var/lib/kubelet/pods/2e71500e-e601-4476-b9a1-7f24291dcc2b/volumes"
Nov 28 13:58:16 crc kubenswrapper[4779]: I1128 13:58:16.284909 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:58:16 crc kubenswrapper[4779]: I1128 13:58:16.285453 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:58:46 crc kubenswrapper[4779]: I1128 13:58:46.285048 4779 patch_prober.go:28] interesting pod/machine-config-daemon-kj9g2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 13:58:46 crc kubenswrapper[4779]: I1128 13:58:46.286549 4779 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 13:58:46 crc kubenswrapper[4779]: I1128 13:58:46.286664 4779 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2"
Nov 28 13:58:46 crc kubenswrapper[4779]: I1128 13:58:46.287651 4779 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90d43b19c21c76ebfcdfe319ab08cbedbcc41b2aaa9010869b06c290a9c8d29b"} pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 13:58:46 crc kubenswrapper[4779]: I1128 13:58:46.287794 4779 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerName="machine-config-daemon" containerID="cri-o://90d43b19c21c76ebfcdfe319ab08cbedbcc41b2aaa9010869b06c290a9c8d29b" gracePeriod=600
Nov 28 13:58:46 crc kubenswrapper[4779]: E1128 13:58:46.415944 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"
Nov 28 13:58:47 crc kubenswrapper[4779]: I1128 13:58:47.309468 4779 generic.go:334] "Generic (PLEG): container finished" podID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec" containerID="90d43b19c21c76ebfcdfe319ab08cbedbcc41b2aaa9010869b06c290a9c8d29b" exitCode=0
Nov 28 13:58:47 crc kubenswrapper[4779]: I1128 13:58:47.309667 4779 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" event={"ID":"3b2a3eb4-4de5-491b-b466-3a35b7d745ec","Type":"ContainerDied","Data":"90d43b19c21c76ebfcdfe319ab08cbedbcc41b2aaa9010869b06c290a9c8d29b"}
Nov 28 13:58:47 crc kubenswrapper[4779]: I1128 13:58:47.310365 4779 scope.go:117] "RemoveContainer" containerID="e5e95ee5d438035986b77d6983fd6e0403152a1d57c6cbb8297ef9ebb38710a9"
Nov 28 13:58:47 crc kubenswrapper[4779]: I1128 13:58:47.311054 4779 scope.go:117] "RemoveContainer" containerID="90d43b19c21c76ebfcdfe319ab08cbedbcc41b2aaa9010869b06c290a9c8d29b"
Nov 28 13:58:47 crc kubenswrapper[4779]: E1128 13:58:47.311315 4779 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kj9g2_openshift-machine-config-operator(3b2a3eb4-4de5-491b-b466-3a35b7d745ec)\"" pod="openshift-machine-config-operator/machine-config-daemon-kj9g2" podUID="3b2a3eb4-4de5-491b-b466-3a35b7d745ec"